How to access an Azure AKS cluster without running Invoke Command?

I have an AKS cluster and it's private. I want to access it from my local machine, and I added the necessary commands for kubeconfig. Now I can list pods with the invoke command, but I want to access the cluster directly with a plain kubectl get pods command. (I don't want to use an alias.)
az aks command invoke \
--resource-group rg-network-spokes \
--name aks_dev_cluster \
--command "kubectl get pods -A"

If your AKS cluster is private, it means its control plane is not exposed on the internet, and therefore you cannot use kubectl to interact with the API without being in the same VNet as your cluster.
You have a few options to do so, such as:
Create a VM in the same VNet as your cluster and install the kubectl client
Create a VPN to connect your computer to the AKS cluster's network
If you are starting with Azure, I would suggest going with the first option as setting up a VPN can be a bit more tedious.
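As a rough sketch of the first option (the VM, VNet and subnet names here are placeholders; adjust them to your environment):
# Create a jumpbox VM inside the cluster's VNet (hypothetical names)
az vm create \
  --resource-group rg-network-spokes \
  --name vm-jumpbox \
  --image Ubuntu2204 \
  --vnet-name my-aks-vnet \
  --subnet my-aks-subnet \
  --generate-ssh-keys

# Then, from an SSH session on that VM: install kubectl, pull the kubeconfig and use it
az aks install-cli
az aks get-credentials --resource-group rg-network-spokes --name aks_dev_cluster
kubectl get pods -A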

You can download the kubeconfig file to "~/.kube/config" and then you are good to go.
Or use Kubernetes Lens to manage the cluster from a UI.

Since your AKS cluster is private, you must be accessing it via VPN, right? You can connect to the VPN to access the cluster over the private network. You can download the kubeconfig via the command below.
Prerequisite: the Azure CLI must be installed.
az aks get-credentials --resource-group my-rg --name my-aks --file my-aks-kubeconfig-ss
It will generate a kubeconfig file named my-aks-kubeconfig-ss. You can copy this config into your .kube/ folder, or anywhere you choose. You can also access the AKS cluster from Lens via its UI.
The second option is to use Lens.
Install Lens. After installation, press Ctrl + Shift + A and a window will open asking for a kubeconfig. Copy the content of my-aks-kubeconfig-ss and paste it there. Bingo, your cluster is added to Lens.

Related

Load Balancer Service type in Azure VMs

I have created a Kubernetes cluster (1 master, 2 worker VMs) using kubeadm on Azure. The NodePort service type works as expected, but the LoadBalancer service type does not.
I created a public IP address in Azure and attached this IP to the service. I can see the IP address attached to the service, but it is not accessible from outside.
I also created a load balancer in Azure and attached its public IP address to the service I created. This option didn't work either.
Just curious to know how to configure the LoadBalancer service type on Azure VMs.
I have tried with AKS and it worked without any issues.
• I would suggest following the steps below for creating an AKS cluster in Azure and attaching a load balancer with a public IP front end to it:
a) Firstly, execute the below command in the Azure CLI in the Azure Bash Cloud Shell. It creates an AKS cluster with two nodes of type ‘Linux’ and a ‘Standard’ load balancer in the given resource group, with the ‘VM set type’ set to ‘VirtualMachineScaleSets’ and the appropriate Kubernetes version specified:
az aks create \
--resource-group <resource group name> \
--name <AKS cluster name> \
--vm-set-type <VMSS or Availability set> \
--node-count <node count> \
--generate-ssh-keys \
--kubernetes-version <version number> \
--load-balancer-sku <basic or standard SKU>
Sample command:
az aks create \
--resource-group AKSrg \
--name AKStestcluster \
--vm-set-type VirtualMachineScaleSets \
--node-count 2 \
--generate-ssh-keys \
--kubernetes-version 1.16.8 \
--load-balancer-sku standard
I would suggest using the below command to check the available Kubernetes orchestrator versions for the specified region in your Azure Bash Cloud Shell, and using the appropriate version in the above command:
az aks get-versions --location eastus --output table
Then, I would suggest using the below command to get the credentials of the AKS cluster you created:
az aks get-credentials --resource-group <resource group name> --name <AKS cluster name>
b) Then execute the below command to get information about the created nodes:
kubectl get nodes
Once the node information is fetched, load the appropriate ‘YAML’ files into the AKS cluster and apply them so they run as an application on it (a minimal example manifest is sketched at the end of this answer). Then, check the service state as below:
kubectl get service <application service name> --watch
c) Then press ‘Ctrl+C’ after noting the public IP address of the load balancer. Then execute the below command to set the managed outbound public IP for the AKS cluster and the configured load balancer:
az aks update \
--resource-group myResourceGroup \
--name myAKSCluster \
--load-balancer-managed-outbound-ip-count 1
This ensures that the services running in the back end have only one public IP in the front end. In this way, you will be able to create an AKS cluster with a load balancer that has a public IP address.
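For reference, the ‘YAML’ files mentioned in step b) could look something like the minimal sketch below (the demo-app name and the aks-helloworld sample image are assumptions for illustration only):
# Hypothetical deployment plus a LoadBalancer service, applied inline
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: demo-app
EOF

# Watch until the service gets an external IP from the Azure load balancer
kubectl get service demo-app --watch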

Unable to connect to the server: dial tcp: lookup <Server Location>: no such host

I'm beginning to build out a Kubernetes cluster for our applications. We are using Azure for cloud services, so my K8s cluster is built using AKS. The AKS cluster was created using the Azure portal interface. It has one node, and I am attempting to create a pod with a single container to deploy to the node. Where I am stuck currently is trying to connect to the AKS cluster from PowerShell.
The steps I have taken are:
az login (followed by logging in)
az account set --subscription <subscription id>
az aks get-credentials --name <cluster name> --resource-group <resource group name>
kubectl get nodes
After entering the last line, I am left with the error: Unable to connect to the server: dial tcp: lookup : no such host
I've also gone down a few other rabbit holes found on SO and other forums, but quite honestly, I'm looking for a straightforward way to access my cluster before complicating it further.
Edit: So in the end, I deleted the resource I was working with and spun up a new version of AKS, and am now having no trouble connecting. Thanks for the suggestions though!
As of now, the AKS Run Command feature adds a fourth option for connecting to private clusters, extending Darius's three options posted earlier:
Use the AKS Run Command feature.
Below are some copy/paste excerpts of a simple command, and one that requires a file. It is possible to chain multiple commands with &&.
az aks command invoke \
--resource-group myResourceGroup \
--name myAKSCluster \
--command "kubectl get pods -n kube-system"
az aks command invoke \
--resource-group myResourceGroup \
--name myAKSCluster \
--command "kubectl apply -f deployment.yaml -n default" \
--file deployment.yaml
In case you get a (ResourceGroupNotFound) error, try adding the subscription, too:
az aks command invoke \
--resource-group myResourceGroup \
--name myAKSCluster \
--subscription <subscription> \
--command "kubectl get pods -n kube-system"
You can also configure the default subscription:
az account set -s <subscription>
Unable to connect to the server: dial tcp: lookup : no such host
The error is coming because of the private cluster. The Private Cluster option was enabled while creating the AKS cluster. You need to disable this option.
kubectl is the Kubernetes command-line client. It connects to the cluster from outside, and we can't connect to a private cluster externally.
Believe me, just disable the private cluster option and see your success. Thank you.
Note: we can't disable this option after cluster creation; you need to delete the cluster and recreate it.
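If you are unsure whether the cluster was created as private, a quick way to check is a query like the one below (the query path reflects the current az aks show output, so treat it as an assumption):
# Prints true for a private cluster, false otherwise (resource names are placeholders)
az aks show \
  --resource-group <resource group name> \
  --name <cluster name> \
  --query apiServerAccessProfile.enablePrivateCluster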
Posting this as Community Wiki for better visibility.
Solution provided by OP:
Delete the resource and spin up a new version of AKS.
For details, you can check the docs: Create a resource group, Create an AKS cluster, and resource create.
The next thing worth trying:
kubectl config use-context <cluster-name>
as it was proposed in similar Github issue.
Gaurav's answer pretty much sums it up. In fact, you can refer to the documentation, which states that:
The API server endpoint has no public IP address. To manage the API server, you'll need to use a VM that has access to the AKS cluster's Azure Virtual Network (VNet). There are several options for establishing network connectivity to the private cluster.
To connect to a private cluster, there are only 3 methods:
Create a VM in the same Azure Virtual Network (VNet) as the AKS cluster.
Use a VM in a separate network and set up Virtual network peering. See the section below for more information on this option.
Use an Express Route or VPN connection.
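As a rough sketch of the second option (VNet peering), assuming hypothetical VNet names vm-vnet and aks-vnet in the same resource group, the peering could be created like this (it must be set up in both directions):
# Peer the VM's VNet to the AKS VNet
az network vnet peering create \
  --name vm-to-aks \
  --resource-group myResourceGroup \
  --vnet-name vm-vnet \
  --remote-vnet aks-vnet \
  --allow-vnet-access

# And the reverse direction
az network vnet peering create \
  --name aks-to-vm \
  --resource-group myResourceGroup \
  --vnet-name aks-vnet \
  --remote-vnet vm-vnet \
  --allow-vnet-access
Note that for a private cluster you may also need to link the VM's VNet to the cluster's private DNS zone, as described in a later answer.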
It is more convenient to use the Az module from desktop PowerShell for management operations than the Azure portal. Microsoft has added a lot of new cmdlets for managing AKS and Service Fabric clusters.
Please take a look at Az.Aks
In your case:
Connect-AzAccount
Get-AzAksNodePool
I was also facing this issue. I'm using a private cluster, and I have a machine (bastion) in a different VNet with peering enabled, but I still was not able to connect to the cluster (I was able to SSH and telnet to the machine).
Then I added a virtual network link in the private DNS zone for the VNet where the bastion host resides. It worked for me; I'm now able to access the cluster.
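A sketch of that virtual network link (the node resource group, zone name and VNet ID below are placeholders; the real private DNS zone of an AKS cluster lives in its MC_* node resource group and has a name ending in privatelink.<region>.azmk8s.io):
# Link the bastion's VNet to the cluster's private DNS zone
az network private-dns link vnet create \
  --resource-group MC_myResourceGroup_myAKSCluster_westeurope \
  --zone-name <guid>.privatelink.westeurope.azmk8s.io \
  --name bastion-vnet-link \
  --virtual-network <resource ID of the bastion VNet> \
  --registration-enabled false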
When using a private cluster, the Kubernetes API endpoint is only accessible from the cluster's VNet. Connecting via VPN unfortunately does not work painlessly, since the Azure private DNS will not be available to VPN clients (yet).
However, it is possible to connect kubectl directly to the IP address of the API endpoint, but that will require you to ignore certificate errors since we are using the IP directly.
Edit your .kube/config and change the server address to the IP address, then call kubectl with something like this:
kubectl get all --all-namespaces --insecure-skip-tls-verify
Usually, this is all that is required to connect. Check that a firewall is not blocking any traffic. Also, verify the subscription ID and other identifiers again and make sure you are using the correct values. If the issue still persists, I recommend you ask Azure support to help you out.
I had the same issue when running kubectl commands from Jenkins. For me it was a permission issue on ~/.kube/config; giving Jenkins access to it solved the issue for me.
You can run kubectl commands on a private AKS cluster using az aks command invoke. Refer to this for more info.
As for why you might want to run private AKS clusters, read this
You can simply append "--admin" to the command as seen below.
az aks get-credentials --name <cluster name> --resource-group <resource group name> --admin
I also hit this after restarting my Kubernetes cluster, but it turned out I was just not waiting long enough; after about 10 minutes the kubectl commands started working again.
If you are using AWS with kops, then this might help you:
mkdir autoscaler
cd autoscaler/
git clone https://github.com/kubernetes/autoscaler.git
Create a file called ig-policy.json with the following contents:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
Then you need to create the IAM policy:
aws iam create-policy --policy-name ig-policy --policy-document file://ig-policy.json
Then attach the IAM policy created above to the cluster's node role:
aws iam attach-role-policy --policy-arn arn:aws:iam::005935423478650:policy/ig-policy --role-name nodes.testing.k8s.local
Then update the cluster
kops update cluster testing.k8s.local --yes
Then run
kops rolling-update cluster
Creating a private cluster is not an easy journey, but it has beautiful views, so I encourage anyone to get there.
I did it all in Terraform, so some names can be a little different from what they are in the portal/Azure CLI.
And this is how I did it:
A private DNS zone, named privatelink.westeurope.azmk8s.io
The VNet where AKS will be placed (let's call it vnet-access)
The virtual network from which you want to access AKS
A private AKS cluster (private_dns_zone_id set to the DNS zone from the first point)
A virtual network link (in the private DNS zone, pointing to the VNet from point 3)
Peering between the networks from points 2 and 3.
This should allow any machine in vnet-access to first resolve the DNS name and then access the cluster...
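In case it helps, a rough az CLI equivalent of points 1 and 4 could look like the sketch below (resource names are placeholders, and the user-assigned identity and permissions required for a custom private DNS zone are omitted for brevity):
# Create the private DNS zone the cluster will use
az network private-dns zone create \
  --resource-group rg-aks \
  --name privatelink.westeurope.azmk8s.io

# Create the private AKS cluster pointing at that zone
az aks create \
  --resource-group rg-aks \
  --name aks-private \
  --enable-private-cluster \
  --private-dns-zone $(az network private-dns zone show \
      --resource-group rg-aks \
      --name privatelink.westeurope.azmk8s.io \
      --query id -o tsv) \
  --vnet-subnet-id <subnet ID in vnet-access> \
  --network-plugin azure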
Yet... if you want to get there from your local machine, that is another setup. Fortunately, Microsoft has a tutorial for that here.
If you find that something is still not working, put the error in a comment and I'll try to adapt my answer to cover it.
For me, I had this issue when I was trying to connect a new Linux user to my Elastic Kubernetes Service (EKS) cluster in AWS.
I set up a new user called jenkins-user, then I tried to run the command below to get pods:
kubectl get pods
And then I ran into the error below:
Unable to connect to the server: dial tcp: lookup 23343445ADFEHGROGMFDFMG.sk1.eu-east-2.eks.amazonaws.com on 198.74.83.506:53: no such host
Here's how I solved it:
The issue was that I had not set the context for the Kubernetes cluster in the kubeconfig file of the new Linux user (jenkins-user).
All I had to do was first install the aws-cli for this new user (into the new user's home directory) and then run aws configure to set up the necessary credentials. Since I already had the aws-cli set up for other users on the Linux system, I simply copied the ~/.aws directory from an existing user to the jenkins-user home directory using the command:
sudo cp -R /home/existing-user/.aws/ /home/jenkins-user/
Next, I had to create a context for the Kubernetes configuration, which creates a new ~/.kube/config file for the jenkins-user, using the command below:
aws eks --region my-aws-region update-kubeconfig --name my-cluster-name
Next, I checked the kubeconfig file to confirm that my context had been added, using the command:
sudo nano ~/.kube/config
This time when I ran the command below, it was successful:
kubectl get pods
Resources: Create a kubeconfig for Amazon EKS
I faced the same issue and resolved it by deleting the .kube folder under C:\Users\<your_username> and then restarting the Kubernetes cluster.

Can't pull image from private Azure Container Registry when using Az CLI to create a new Azure Container Instance

I've created a service principal with push and pull access to/from my private Azure Container Registry. Pushing to ACR works perfectly fine with the following command:
az login --service-principal -u "someSpID" -p "someSpSecret" --tenant "someTenantID"
az acr login --name "someRegistry"
docker push "someRegistry.azurecr.io/my-image:0.0.1"
And I am also able to pull the image directly with the following command:
docker pull "someRegistry.azurecr.io/my-image:0.0.1"
I want to deploy a container instance into a private subnet, and I've configured the network security to allow access from said subnet.
However, when I attempt to deploy a container instance into my private subnet with the following command, where I specified the same service principal I had previously logged in with, I get an error response.
az container create \
--name myContainerGroup \
--resource-group myResourceGroup \
--image "someRegistry.azurecr.io/my-image:0.0.1" \
--os-type Linux \
--protocol TCP \
--registry-login-server someRegistry.azurecr.io \
--registry-password someSpSecret \
--registry-username someSpID \
--vnet someVNET \
--subnet someSubnet \
--location someLocation \
--ip-address Private
Error:
urllib3.connectionpool : Starting new HTTPS connection (1): management.azure.com:443
urllib3.connectionpool : https://management.azure.com:443 "PUT /subscriptions/mySubscription/resourceGroups/myResourceGroup/providers/Microsoft.ContainerInstance/containerGroups/myContainerGroup?api-version=2018-10-01 HTTP/1.1" 400
msrest.http_logger : Response status: 400
The image 'someRegistry.azurecr.io/my-image:0.0.1' in container group 'myContainerGroup' is not accessible. Please check the image and registry credential.
The same error ensues when I try and deploy the container instance through Azure Portal.
When I tried deploying a public image into the same subnet, it succeeded, so it isn't a deployment permission issue; nor does it seem to be wrong service principal credentials, as docker pull "someRegistry.azurecr.io/my-image:0.0.1" works just fine. I can't quite wrap my head around this inconsistent behavior. Ideas, anyone?
For your issue, here is a possible reason to explain the error you got. Let's look at the limitation described here:
Only an Azure Kubernetes Service cluster or Azure virtual machine can be used as a host to access a container registry in a virtual network. Other Azure services including Azure Container Instances aren't currently supported.
This limitation means the Azure Container Registry firewall does not currently support Azure Container Instances. It only supports pulling/pushing images from an Azure VM or AKS cluster.
So the solution for you is to change the registry's network rules to allow all networks and then try again, or use an AKS cluster instead, though that will also cost more.
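If you go the allow-all-networks route, a sketch of how that could be done with the CLI (reusing the someRegistry name from the question):
# Inspect the current network rules of the registry
az acr network-rule list --name someRegistry

# Set the registry's default network action back to Allow
az acr update --name someRegistry --default-action Allow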

Azure container service not creating the agent nodes

I have been working on the deployment of Windows containers from Azure Container Registry to Azure Container Service with the Kubernetes orchestrator; it was working fine previously.
Now I'm trying to create an ACS Kubernetes cluster with Windows nodes, but the create command is only creating a master node, and while deploying I'm getting the following error: No nodes are available that match all of the following predicates:: MatchNodeSelector (1)
I have followed this link https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-windows-walkthrough to create the Windows-based Kubernetes cluster.
This is the command I have used to create the cluster:
az acs create --orchestrator-type=kubernetes \
--resource-group myResourceGroup \
--name=myK8sCluster \
--agent-count=2 \
--generate-ssh-keys \
--windows --admin-username azureuser \
--admin-password myPassword12
As per the above documentation, the above command should create a cluster named myK8sCluster with one Linux master node and two Windows agent nodes.
To verify the creation of cluster I have used the below command
kubectl get nodes
NAME STATUS AGE VERSION
k8s-master-98dc3136-0 Ready 5m v1.7.7
According to the output above, only the Linux master node was created, not the two Windows agent nodes.
But in my case I require the Windows agent nodes to deploy a Windows-based container in the cluster.
So I assume that is why I'm getting the following error while deploying: No nodes are available that match all of the following predicates:: MatchNodeSelector (1)
As the documentation points out, ACS with a target of Kubernetes is deprecated. You want to use AKS (Azure Kubernetes Service).
To go about this, start here: https://learn.microsoft.com/en-us/azure/aks/windows-container-cli
Make sure you have the latest version of the CLI installed on your machine if you choose to do it locally, or use the Azure Cloud Shell.
Follow the guide on the rest of the steps as it will walk you through the commands.
For your issue, as far as I know, the possible reason is that you need to enable the WindowsPreview feature. You can check it with a CLI command like this:
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/WindowsPreview')].{Name:name,State:properties.state}"
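If that query shows the feature as NotRegistered, it could presumably be registered with something along these lines (this was part of the old Windows preview flow, so treat it as an assumption):
# Register the Windows preview feature and re-register the provider
az feature register --namespace Microsoft.ContainerService --name WindowsPreview
az provider register --namespace Microsoft.ContainerService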
When that is OK, you also need to pay attention to the Kubernetes version. When I used the same command that you used, the Windows nodes were created successfully, but kubectl get nodes only showed the master, even though I could see the Windows nodes in the resource group.
Then I tried the command with the additional parameter --orchestrator-version set to 1.12.7, and the whole command looks like below:
az acs create --orchestrator-type=kubernetes \
--resource-group myResourceGroup \
--name=myK8sCluster \
--agent-count=2 \
--generate-ssh-keys \
--windows --admin-username azureuser \
--admin-password myPassword12 \
--orchestrator-version 1.12.7 \
--location westcentralus
Then it works well, and the command kubectl get nodes -o wide shows both the master and the Windows agent nodes.
But as you know, ACS will be deprecated, so I would suggest you use AKS with Windows nodes in its preview version. Or you can use aks-engine, as AKS Engine is the next version of the ACS-Engine project.

Kubernetes: Unable to access to kubernetes dashboard

I added bitnami.bitnami/rabbitmq into my ACR.
In my VSO release pipeline, I added 2 tasks, kubectl run and expose, which look like below.
kubectl run rabbitmq --image xxxxxx.azurecr.io/bitnami.bitnami/rabbitmq:3.7.7 --port=15672
kubectl expose deployment rabbitmq --type=LoadBalancer --port=15672 --target-port=15672
After saving and releasing it, everything is successful, but now I can't proxy into my dashboard using
az aks browse -g {groupname} -n {k8sname}
When I remove the above 2 tasks from my release, I am able to connect to my dashboard.
Can someone explain to me what is going wrong and how to troubleshoot it?
You can check if the pods work well in your Azure Kubernetes cluster. If everything is OK, then you should make sure that your current OS has a browser: the command az aks browse -g {groupname} -n {k8sname} needs a browser to open the dashboard where it executes.
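For example, a quick check could look like this (the k8s-app=kubernetes-dashboard label assumes the default dashboard deployment in kube-system on older AKS clusters):
# Make sure your own pods and the dashboard pod are running
kubectl get pods --all-namespaces
kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard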
You can open the k8s dashboard on another OS with the command you posted, after you get the credentials with the command az aks get-credentials -g {groupname} -n {k8sname}. Of course, you need to execute az login first.
If all the things above are OK, you could try this link.
