Unable to connect with Azure Container Services - Kubernetes - azure

I am working on setting up environment for deploying microservices.
I have gotten as far as building my code and deploying it to a registry, but I'm having problems running it in Azure Container Services.
I am following this guide to connect to ACS: https://learn.microsoft.com/en-us/azure/container-service/container-service-connect
But I fail at the step: Download Cluster Credentials.
Using the given command:
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
Of course, I change the resource group and cluster name to the correct names from my portal. I get an error:
[WinError 10049] The requested address is not valid in its context
(If I change the resource group or cluster name to something else I get other errors, so it seems it can find those at least.)
When I search for the error it seems to be some IP address problem, but I can't figure out what to do. I tried running the same command from another network (from home) to make sure the work firewall isn't blocking something, but I get the same error.
Any help appreciated!

This command copies the cluster credentials to your machine. In the background, it SSHes to your cluster's master VM and copies the credentials.
So, you should ensure you can SSH to the master VM manually. If you cannot SSH to the master VM manually, the az command cannot do it either. You can get your master DNS name from the Azure Portal.
ssh -i id_rsa <user>@<master-dns-name>
Note: if the az command does not work but you can SSH to the master VM, you can download the credentials to your machine manually; they are the same. Your linked guide covers this.
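For example, a manual copy might look like the following (the config path on the master is an assumption; adjust it to wherever the credentials actually live on your master):
scp -i id_rsa <user>@<master-dns-name>:.kube/config ~/.kube/config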
You also need to check your Azure CLI version. You can use the following command:
az --version
My version is 2.0.2, and it works for me.

Related

Azure CLI - AKS Communication Issue

I have a problem executing kubectl commands; they error out with an x509 certificate error:
Unable to connect to the server: x509: certificate signed by unknown authority
I am able to log in with az login; after that I connect to my AKS cluster using the command below:
az aks get-credentials --resource-group sitecore10.x-dev-k8s --name sitecore102-Dev-AKS-v1 --overwrite-existing
After that I run kubectl get pods or kubectl get services, but it doesn't work.
I already tried adding environment variables.
I opened the .kube config file and opened the same URL in a browser; it displayed the error below.
Resolved
I got a chance to resolve this issue.
The actual issue is that the AKS URL (https://AKSInstance.hcp.westus.azmk8s.io:443) is blocked by the company's internet monitoring software (Netskope).
I raised a support ticket to whitelist the URL https://*.azmk8s.io.
How to check the issue:
Open the config file at C:\Users\[YourUserName]\.kube to identify the AKS URL, then try opening that URL directly in a browser. If you get a 401 authentication error, you're good; if instead you get an error related to certificates, your internet monitoring software is probably blocking calls to the AKS URL.
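Alternatively, assuming kubectl is already installed and your kubeconfig is in the default location, you can read the API server URL straight from the config instead of opening the file by hand:
kubectl config view --minify -o jsonpath="{.clusters[0].cluster.server}"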
I tried to reproduce the same in my environment, connecting to the AKS cluster from a Windows machine:
I created an AKS cluster, like below:
Go to Azure Portal > Kubernetes Services > Create.
Download the kubectl tool here and install it on the Windows machine, like below.
Open PowerShell (or CMD), navigate to the download folder, and run kubectl.exe, like below.
Connect to your cluster using Cloud Shell to download the kubeconfig file, like below.
Once connected to the cluster, download the config file to the local Windows machine.
Create a folder named .kube in your user profile folder and place the downloaded config file in it.
Path: C:\Users\yourusername
Now run kubectl commands to get the AKS cluster details from the Windows machine, like below.
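For reference, a rough PowerShell sketch of the last few steps (assuming the config file was saved to the Downloads folder):
mkdir $HOME\.kube
copy $HOME\Downloads\config $HOME\.kube\config
kubectl get nodes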
Reference: kubectl unable to connect to server: x509: certificate signed by unknown authority by bherto39.

Unable to connect to the server: dial tcp: lookup <Server Location>: no such host

I'm beginning to build out a Kubernetes cluster for our applications. We are using Azure for cloud services, so my K8s cluster is built using AKS. The AKS cluster was created using the Azure portal interface. It has one node, and I am attempting to create a pod with a single container to deploy to that node. Where I am currently stuck is trying to connect to the AKS cluster from PowerShell.
The steps I have taken are:
az login (followed by logging in)
az account set --subscription <subscription id>
az aks get-credentials --name <cluster name> --resource-group <resource group name>
kubectl get nodes
After entering the last line, I am left with the error: Unable to connect to the server: dial tcp: lookup : no such host
I've also gone down a few other rabbit holes found on SO and other forums, but quite honestly, I'm looking for a straightforward way to access my cluster before complicating it further.
Edit: So in the end, I deleted the resource I was working with and spun up a new version of AKS, and am now having no trouble connecting. Thanks for the suggestions though!
As of now, the AKS run command adds a fourth option for connecting to private clusters, extending @Darius's three options posted earlier:
Use the AKS Run Command feature.
Below are some copy/paste excerpts of a simple command, and one that requires a file. It is possible to chain multiple commands with &&.
az aks command invoke \
--resource-group myResourceGroup \
--name myAKSCluster \
--command "kubectl get pods -n kube-system"
az aks command invoke \
--resource-group myResourceGroup \
--name myAKSCluster \
--command "kubectl apply -f deployment.yaml -n default" \
--file deployment.yaml
In case you get a (ResourceGroupNotFound) error, try adding the subscription, too:
az aks command invoke \
--resource-group myResourceGroup \
--name myAKSCluster \
--subscription <subscription> \
--command "kubectl get pods -n kube-system"
You can also configure the default subscription:
az account set -s <subscription>
Unable to connect to the server: dial tcp: lookup : no such host
The error occurs because of the private cluster. The Private Cluster option was enabled while creating the AKS cluster; you need to disable this option.
kubectl is the Kubernetes command-line client. It connects to your Kubernetes cluster from outside, and we can't connect to a private cluster externally.
Believe me, just disable the private cluster option and see your success. Thank you.
Note: this option can't be disabled after cluster creation; you need to delete the cluster and recreate it.
Posting this as Community Wiki for better visibility.
Solution provided by OP:
Delete the resource and spin up a new AKS cluster.
For details, you can check the docs Create a resource group, Create AKS cluster and resource create.
Another option worth trying:
kubectl config use-context <cluster-name>
as proposed in a similar GitHub issue.
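To see which contexts are available before switching, you can list them first:
kubectl config get-contexts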
Gaurav's answer pretty much sums it up. In fact, you can refer to the documentation, which states:
The API server endpoint has no public IP address. To manage the API server, you'll need to use a VM that has access to the AKS cluster's Azure Virtual Network (VNet). There are several options for establishing network connectivity to the private cluster.
To connect to a private cluster, there are only 3 methods:
Create a VM in the same Azure Virtual Network (VNet) as the AKS cluster (see the sketch after this list).
Use a VM in a separate network and set up Virtual network peering. See the section below for more information on this option.
Use an Express Route or VPN connection.
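As a rough sketch of the first option (all names here are placeholders), you could create a small jumpbox VM in the AKS cluster's VNet and run kubectl from there; the exact image alias depends on your CLI version:
az vm create --resource-group myResourceGroup --name aks-jumpbox --image Ubuntu2204 --vnet-name <aks-vnet> --subnet <aks-subnet> --admin-username azureuser --generate-ssh-keys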
It is more convenient to use the Az module from desktop PowerShell for management operations than the Azure portal. Microsoft has added a lot of new cmdlets for managing AKS and Service Fabric clusters.
Please take a look at Az.Aks.
In your case:
Connect-AzAccount
Get-AzAksNodePool
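If you also need the cluster credentials for kubectl from PowerShell, the Az.Aks module has an equivalent of az aks get-credentials (resource group and cluster names below are placeholders):
Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster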
I was also facing this issue. I'm using a private cluster and have a machine (bastion) in a different VNet with peering enabled, but I still was not able to connect to the cluster (I was able to SSH and telnet to the machine).
Then I added a virtual network link in the private DNS zone for the VNet where the bastion host resides. That worked for me; I'm now able to access the cluster.
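With the Azure CLI, that link might be created roughly like this (zone and VNet names are placeholders; for AKS the private DNS zone usually lives in the node resource group):
az network private-dns link vnet create --resource-group <node-resource-group> --zone-name <private-dns-zone-name> --name bastion-vnet-link --virtual-network <bastion-vnet-id> --registration-enabled false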
When using a private cluster, the Kubernetes API endpoint is only accessible from the cluster's VNet. Connecting via VPN unfortunately does not work painlessly, since Azure private DNS is not available to VPN clients (yet).
However, it is possible to point kubectl directly at the IP address of the API endpoint, but that requires you to ignore certificate errors since you are using the IP directly.
Edit your .kube/config and change the server address to the IP address, then call kubectl with something like this:
kubectl get all --all-namespaces --insecure-skip-tls-verify
Usually, this is all that is required to connect. Check whether a firewall is blocking any traffic. Also, verify the subscription ID and other identifiers again and make sure you are using the correct values. If the issue still persists, I recommend asking Azure support to help you out.
I had the same issue when running kubectl commands from Jenkins. For me it was a permission issue on ~/.kube/config; I gave the Jenkins user access to it as well, which solved the issue for me.
You can run kubectl commands on a private AKS cluster using az aks command invoke. Refer to this for more info.
As for why you might want to run private AKS clusters, read this
You can simply append "--admin" to the command, as seen below.
az aks get-credentials --name <cluster name> --resource-group <resource group name> --admin
I also hit this after restarting my Kubernetes cluster, but it turned out I was just not waiting long enough; after about 10 minutes the kubectl commands started working again.
If you are using AWS with kops, then this might help you.
mkdir autoscaler
cd autoscaler/
git clone https://github.com/kubernetes/autoscaler.git
Create a file called ig-policy.json with the contents:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
Then you need to create the IAM policy:
aws iam create-policy --policy-name ig-policy --policy-document file://ig-policy.json
Then attach the IAM policy created above (by its ARN) to the cluster's node role:
aws iam attach-role-policy --policy-arn arn:aws:iam::005935423478650:policy/ig-policy --role-name nodes.testing.k8s.local
Then update the cluster
kops update cluster testing.k8s.local --yes
Then run
kops rolling-update cluster
Creating a private cluster is not an easy journey, but it has beautiful views, so I encourage anyone to get there.
I did it all in Terraform, so some names can be a little different than they are in the portal/Azure CLI.
And this is how I did it:
Private DNS zone, with name as privatelink.westeurope.azmk8s.io
VNET where AKS will be placed (let's call it vnet-access)
Virtual network from which you want to access AKS
Private AKS (private_dns_zone_id set to the DNS zone from the first point)
Virtual network link (in private DNS zone, pointing to VNET from point 3)
Peering between networks from points 2 and 3.
This should allow any machine in vnet-access to first resolve DNS, and then to access the cluster.
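For reference, a rough Azure CLI equivalent of the steps above might look like this (all names are placeholders, and a real setup also needs an identity with rights on the DNS zone):
az network private-dns zone create --resource-group myResourceGroup --name privatelink.westeurope.azmk8s.io
az aks create --resource-group myResourceGroup --name myPrivateAKS --enable-private-cluster --private-dns-zone <private-dns-zone-resource-id> --vnet-subnet-id <vnet-access-subnet-id> --generate-ssh-keys
az network private-dns link vnet create --resource-group myResourceGroup --zone-name privatelink.westeurope.azmk8s.io --name access-vnet-link --virtual-network <virtual-network-from-point-3-id> --registration-enabled false
az network vnet peering create --resource-group myResourceGroup --name vnet-access-to-point-3 --vnet-name vnet-access --remote-vnet <virtual-network-from-point-3-id> --allow-vnet-access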
Yet... if you want to get there from your local machine, that is another setup. Fortunately Microsoft has a tutorial for it here.
If you find that something is still not working, put the error in a comment and I'll try to adapt my answer to cover it.
For me, this issue occurred when I was trying to connect a new Linux user to my Elastic Kubernetes Service (EKS) cluster in AWS.
I set up a new user called jenkins-user, then tried to run the command below to get pods:
kubectl get pods
And then I will run into the error below:
Unable to connect to the server: dial tcp: lookup 23343445ADFEHGROGMFDFMG.sk1.eu-east-2.eks.amazonaws.com on 198.74.83.506:53: no such host
Here's how I solved it:
The issue was that I had not set the context for the Kubernetes cluster in the kubeconfig file of the new Linux user (jenkins-user).
All I had to do was first install the AWS CLI for this new user (into the home directory of the new user) and run aws configure to set up the necessary credentials. Since I already had the AWS CLI set up for the other users on the Linux system, I simply copied the ~/.aws directory from an existing user to the jenkins-user home directory using the command:
sudo cp -R /home/existing-user/.aws/ /home/jenkins-user/
Next, I had to create a context for the Kubernetes configuration, which creates a new ~/.kube/config file for jenkins-user, using the command below:
aws eks --region my-aws-region update-kubeconfig --name my-cluster-name
Next, I checked the kubeconfig file to confirm that my context had been added, using the command:
sudo nano ~/.kube/config
This time when I ran the command below, it was successful:
kubectl get pods
Resources: Create a kubeconfig for Amazon EKS
I faced the same issue and resolved it by deleting the .kube folder under the path C:\Users\<your_username> and then restarting the Kubernetes cluster.
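On Windows, a rough PowerShell sketch of that reset (note this removes cached credentials for all clusters, and the resource group and cluster names are placeholders):
Remove-Item -Recurse -Force $HOME\.kube
az aks get-credentials --resource-group <resource-group> --name <cluster-name>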

azure kubernetes dashboard not configured

I have a cluster configured on Azure Kubernetes, and the services are working fine.
I am following this article:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-dashboard
I am trying to view the dashboard using the command below, but I get the error that follows:
az aks browse --resource-group DemoRG --name aksdemo2
Proxy running on http://127.0.0.1:8001/
Press CTRL+C to close the tunnel...
Error: unknown flag: --address
My cluster does not have RBAC enabled; I am unsure if this is related to a network issue or something different.
Eventually the issue was resolved by the author of this post by following existing GitHub issue #8642:
I had two copies of kubectl and the one from Docker was overriding the one from Azure. Found this by running "where kubectl" from the command prompt, and deleting the Docker copy.
Just run kubectl proxy, then go to the following URL:
http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/overview?namespace=default
I used kubectl proxy to access the dashboard

Login to azure container

I used the following quickstart doc to spin up my first Azure container:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-quickstart#feedback
It worked fine, but how do I connect to the container if I want to debug something?
You cannot connect to the container itself directly to debug, i.e. you can't SSH or RDP to it. Take a look at this graphic, which highlights how a container differs from virtual machines.
You can, however, pull logs from your container from the container engine. In your case you would want to use the following command in the Azure CLI: az container logs.
https://aka.ms/container_logs
When you invoke the CLI through the portal, you should already be connected through your subscription. To debug or troubleshoot, you can look at the container logs. Check out this documentation for the exact commands:
https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-logs
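For example (resource group and container group names are placeholders):
az container logs --resource-group myResourceGroup --name mycontainergroup
If you want to stream output instead, az container attach works similarly.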
When I am building containers to run on ACI, I build them first in a local Docker instance where they can be connected to and interactively debugged. When you're happy with how they run locally, push them into ACI and debug from the output logs if needed.
I get to the bash shell in my Azure containers either with the azure-cli package, as the OP noted in a comment:
az container exec --exec-command "/bin/bash"
Or by navigating to a container instance in the Azure portal; under Settings > Containers there is a "Connect" tab.
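For completeness, the full exec command also needs the resource group and container group name (placeholders below); add --container-name if the group runs more than one container:
az container exec --resource-group myResourceGroup --name mycontainergroup --exec-command "/bin/bash"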

Could not connect to VM created with Azure command line tools

I am trying to use the Azure Command Line Tools (http://www.windowsazure.com/en-us/manage/linux/how-to-guides/command-line-tools/) to create an Ubuntu 12.04 VM.
I am issuing the following commands:
azure vm create xxxxxxxxxx.cloudapp.net b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_04_1-LTS-amd64-server-20121218-en-us-30GB azureuser mypassword --location "West Europe"
azure vm endpoint create xxxxxxxxxx 22 22
azure vm start xxxxxxxxxx
This seems to create and start the VM successfully.
I try to connect via SSH to the VM using the following command (on Mac OS X)
ssh azureuser@xxxxxxxxxx.cloudapp.net
However, when I try to SSH into the VM, it seems that password authentication is disabled on the VM as I am getting the following error:
Permission denied (publickey).
I would like to add that connecting via SSH to an Ubuntu VM created through the Azure Management portal works absolutely fine. This issue only appears when the VM was created through the Azure command line tools.
Has anybody encountered a similar issue and knows how to solve it?
You need to use the --ssh switch on your azure vm create command to enable ssh. Adding the endpoint has no effect.
According to the Windows Azure command-line tool for Mac and Linux documentation, you can only add SSH connectivity via the Azure CLI when the virtual machine is created.
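A sketch of the create command with SSH enabled might look like the following (same image and DNS name as above; --ssh without a port is assumed to default to 22):
azure vm create xxxxxxxxxx.cloudapp.net b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_04_1-LTS-amd64-server-20121218-en-us-30GB azureuser mypassword --ssh --location "West Europe"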
