I'm currently having some issues signing in to a private AKS cluster with the following commands:
az account set --subscription [subscription_id]
az aks get-credentials --resource-group [resource-group] --name [AKS_cluster_name]
After I type those two commands, I'm asked to authenticate through the web with a code generated by the Azure CLI, and after that I get the following issue in the terminal:
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code RTEEREDTE to authenticate.
Unable to connect to the server: dial tcp: lookup aksdusw2aks01-0581cf8f.hcp.westus2.azmk8s.io: i/o timeout
What could be the potential issue? How can I successfully log in to a private AKS cluster?
Notes:
I have some other clusters and I'm able to log in to them through the terminal without any errors.
You can't use kubectl to access the API server of a private AKS cluster; that's the design of making it private (no public access). You will need to use az aks command invoke to run commands through the Azure API:
az aks command invoke -n <CLUSTER_NAME> -g <CLUSTER_RG> -c "kubectl get pods -A"
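If you need to apply a local manifest through the same channel, command invoke can also attach local files (a hedged sketch; check az aks command invoke --help on your CLI version for the --file flag):
# run kubectl apply inside the cluster, shipping the local manifest along
az aks command invoke -n <CLUSTER_NAME> -g <CLUSTER_RG> --file deployment.yaml -c "kubectl apply -f deployment.yaml"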
Timeouts typically mean that something, somewhere, is dropping packets and there is no response. This might be the security policies and/or traffic rules in your Azure environment that are configured for your AKS cluster. You can double-check that they are consistent with those of the cluster you say is accessible.
In order to interact with the private cluster you'll need to run your commands from an endpoint that has access to the VNet the AKS cluster is in. So you'll need a VM in that VNet, a peered VNet, a VPN into it, etc. (see the sketch below).
The point of a private cluster is to prevent access from external sources; only connected networks are allowed.
You can also, as mentioned by Wlez, use command invoke, but this is probably suited to occasional use rather than responsive, frequent access.
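A minimal sketch of the jumpbox approach, assuming a VM placed in the cluster's VNet (or one peered with it); every name below is a placeholder:
# create a small jumpbox VM inside the cluster VNet
az vm create -g <CLUSTER_RG> -n jumpbox --vnet-name <AKS_VNET> --subnet <JUMPBOX_SUBNET> --image Ubuntu2204 --generate-ssh-keys
# then, from a shell on the jumpbox, the private API server resolves and responds:
az aks get-credentials --resource-group <CLUSTER_RG> --name <CLUSTER_NAME>
kubectl get nodes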
Related
I have Azure Event Hub set up and it is open to a selected network, which means only whitelisted IPs can access it.
On the other end, I have Azure Kubernetes Service configured to read messages from the Event Hub. I get a connection error saying the broker is not available, because the IPs of Kubernetes are not whitelisted in Event Hub.
Question is: where would I get the IPs of AKS that can be configured in my Event Hub?
What I have already tried: in the Overview of the AKS cluster, we have certain IPs as below.
Pod CIDR
Service CIDR
DNS service IP
Docker bridge CIDR
Adding the above isn't working.
You need to register the Azure Kubernetes Service (AKS) with Azure Event Grid. This can be done either through the CLI or PowerShell.
There is a set of commands you need to perform to complete the task:
Enable the Event Grid resource provider
Create an AKS cluster using the az aks create command
Create a namespace and event hub using az eventhubs namespace create and az eventhubs eventhub create
Refer to the official documentation to implement this using the given examples; a hedged sketch of the commands follows.
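A sketch of those steps, with placeholder names throughout:
# 1. enable the Event Grid resource provider
az provider register --namespace Microsoft.EventGrid
# 2. create an AKS cluster
az aks create -g <RESOURCE_GROUP> -n <CLUSTER_NAME> --generate-ssh-keys
# 3. create an Event Hubs namespace and an event hub
az eventhubs namespace create -g <RESOURCE_GROUP> -n <EH_NAMESPACE> -l <LOCATION>
az eventhubs eventhub create -g <RESOURCE_GROUP> --namespace-name <EH_NAMESPACE> -n <EVENT_HUB>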
Notice that when I create an Azure Kubernetes cluster, by default it creates the API server with an *.azmk8s.io FQDN, and it is external facing.
Is there a way to create it with an internal IP instead? If yes, can this be protected by an NSG and virtual network so that connections are limited to coming via a jump server?
Is there any drawback to allowing only internal IPs?
Below is the command I used to create it:
az aks create -g [resourceGroup] -n [ClusterName] --windows-admin-password [SomePassword] --windows-admin-username [SomeUserName] --location [Location] --generate-ssh-keys -c 1 --enable-vmss --kubernetes-version 1.14.8 --network-plugin azure
Has anyone tried https://learn.microsoft.com/en-us/azure/aks/private-clusters? Does that still allow external-facing apps but a private management API?
Why not? Only the control plane endpoint is different; in all other regards it's a regular AKS cluster.
In a private cluster, the control plane or API server has internal IP addresses that are defined in the RFC 1918 - Address Allocation for Private Internets document. By using a private cluster, you can ensure that network traffic between your API server and your node pools remains on the private network only.
That document also outlines how to connect to a private cluster with kubectl. NSGs should work as usual.
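As a hedged sketch, creating a private cluster only needs one extra flag on top of a normal create (names are placeholders; the rest of your original flags still apply):
az aks create -g <RESOURCE_GROUP> -n <CLUSTER_NAME> --enable-private-cluster --network-plugin azure --generate-ssh-keys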
Kops lets us create a Kubernetes cluster along with a bastion that has SSH access to the cluster nodes.
With this setup, is it still considered safe to use kubectl to interact with the Kubernetes API server?
kubectl can also be used to get a shell on the pods; does this need any restrictions?
What are the precautionary steps that need to be taken, if any?
Should the Kubernetes API server also be made accessible only through the bastion?
Deploying a Kubernetes cluster with the default Kops settings isn't secure at all and shouldn't be used in production as such. There are multiple configuration settings that can be changed using the kops edit command. The following points should be considered after creating a Kubernetes cluster via Kops:
Cluster Nodes in Private Subnets (existing private subnets can be specified using --subnets with the latest version of kops)
Private API LoadBalancer (--api-loadbalancer-type internal)
Restrict API Loadbalancer to certain private IP range (--admin-access 10.xx.xx.xx/24)
Restrict SSH access to Cluster Node to particular IP (--ssh-access xx.xx.xx.xx/32)
A hardened image can also be provisioned for cluster nodes (--image)
The authorization level must be RBAC. With the latest Kubernetes versions, RBAC is enabled by default.
Audit logs can be enabled via configuration in kops edit cluster:
kubeAPIServer:
  auditLogMaxAge: 10
  auditLogMaxBackups: 1
  auditLogMaxSize: 100
  auditLogPath: /var/log/kube-apiserver-audit.log
  auditPolicyFile: /srv/kubernetes/audit.yaml
Kops provides reasonable defaults, so the simple answer is: it is reasonably safe to use Kops-provisioned infrastructure as-is after provisioning.
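As a hedged sketch, several of the flags above can be combined at creation time (placeholder names; exact flags may vary by kops version):
# create a private-topology cluster with an internal API LB, a bastion, and restricted access
kops create cluster \
  --name=<CLUSTER_NAME> \
  --state=s3://<KOPS_STATE_BUCKET> \
  --zones=<ZONES> \
  --topology=private \
  --networking=calico \
  --bastion \
  --api-loadbalancer-type=internal \
  --admin-access=10.0.0.0/24 \
  --ssh-access=<YOUR_IP>/32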
I just deployed a managed Kubernetes cluster with Azure Container Service. My deployment includes a single agent machine over the managed cluster and an Azure disk attached to it for persistent storage.
The problem I am facing is that I don't know how to SSH into this agent server. I read that you should be able to SSH into the master node and connect to the agent from there, but as I am using a managed Kubernetes master I can't find a way of doing this.
Any idea? Thank you in advance.
The problem I am facing is that I don't know how to ssh this agent server.
Do you mean you created an AKS cluster and can't find the master VM?
If I understand correctly, that is by-design behavior: AKS does not provide direct access (such as SSH) to the cluster.
If you want to SSH to an agent node, as a workaround we can create a public IP address and associate it with the agent's NIC; then we can SSH to the agent.
Here are my steps:
1. Create a public IP address via the Azure portal.
2. Associate the public IP address with the agent VM's NIC.
3. SSH to this VM with the public IP address.
Note: by default, you can find the SSH key when you create the AKS cluster.
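The same workaround can be done from the CLI (a hedged sketch; <NODE_RG> is the node resource group, and the NIC and ipconfig names are placeholders you can read with az network nic list):
# create a public IP and attach it to the agent VM's NIC
az network public-ip create -g <NODE_RG> -n agent-public-ip
az network nic ip-config update -g <NODE_RG> --nic-name <AGENT_NIC_NAME> --name ipconfig1 --public-ip-address agent-public-ip
# then SSH with the new public IP
ssh azureuser@<PUBLIC_IP>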
Basically, you don't even have to create a public IP for that node. Simply add a public SSH key to the desired node with the Azure CLI:
az vm user update --resource-group <NODE_RG> --name <NODE_NAME> --username azureuser --ssh-key-value ~/.ssh/id_rsa.pub
Then run a temporary pod (don't forget to switch to the desired namespace in your Kubernetes config):
kubectl run -it --rm aks-ssh --image=debian
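The stock debian image doesn't ship an SSH client, so install one inside the pod first (a small step the answer assumes):
# inside the aks-ssh pod
apt-get update && apt-get install -y openssh-client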
Copy the private SSH key to that pod:
kubectl cp ~/.ssh/id_rsa <POD_NAME>:/id_rsa
Finally, from the pod, connect to the AKS node's private IP:
ssh -i id_rsa azureuser@<NODE_PRIVATE_IP>
This way you don't have to pay for a public IP and, in addition, it is better from a security perspective.
The easiest way is to use the tool below; it creates a tiny privileged pod on the node and accesses the node using nsenter.
https://github.com/mohatb/kubectl-wls
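On recent kubectl versions (1.20 or later), kubectl debug can achieve much the same without a plugin; it starts a privileged pod on the node whose host filesystem is mounted at /host:
kubectl debug node/<NODE_NAME> -it --image=ubuntu
# inside the debug pod:
chroot /host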
I created an Azure Kubernetes cluster via "az acs create".
I exec into an existing pod:
kubectl exec -it mypod /bin/bash
and run curl http://myexternlip.com/raw
The IP I get is the public IP address of the k8s agents... So far so good.
Now I create an Azure Kubernetes cluster via "acs-engine".
I make the same "exec" as mentioned above...
Now I can't find the IP in any Azure component: neither in the agents nor in the load balancers.
Where is this IP configured?
Regards,
saromba
This should have the information you're looking for:
https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-load-balancing
In short, when you expose a service with type=LoadBalancer, ACS creates a public IP address resource that Kubernetes uses.
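As a hedged sketch with placeholder names, exposing a deployment this way makes Kubernetes provision the public IP, which then shows up in the cluster's node resource group:
# expose a deployment through an Azure load balancer
kubectl expose deployment myapp --type=LoadBalancer --port=80
kubectl get service myapp --watch    # EXTERNAL-IP fills in once Azure allocates it
# the matching Azure resource:
az network public-ip list -g <NODE_RG> -o table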