Kubernetes install on ACS incomplete?

I have tried three installs through the portal and finally one entirely through the Azure CLI (with Ubuntu on Windows). Each time the deployment completes, but remotely run commands fail with "resource temporarily unavailable". I can SSH into the master server, but when I do I find that the kubectl commands all come up empty (nodes, pods, namespaces, version). service --status-all does not list a single Kubernetes service (I would expect to see the API server at least).
When creating through the portal I manually created the SPN and verified I could log in to Azure with it. During the CLI setup I let the install create the SPN.
I have tried restarting the master but nothing changes.
What am I missing? It is probably something easy but I am spinning my wheels.
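For what it's worth, here is roughly the kind of check I have been running on the master; this is a sketch assuming a systemd-based master VM, and the unit names are guesses on my part:

# List anything Kubernetes-related that systemd knows about:
systemctl list-units --type=service | grep -i kube

# If a kubelet unit exists but is not running, its log usually says why:
sudo journalctl -u kubelet --no-pager | tail -n 50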

Related

Azure DevOps Release Pipeline || To sign in, use a web browser to open

I created the AKS cluster with an Azure service principal ID and granted it the Contributor role on the subscription and resource group.
Every time I execute the pipeline it asks me to sign in, and only after I authenticate does it fetch the data.
Also, the "kubectl get" task takes more than 30 minutes and ends with "Kubectl Server Version: Could not find kubectl server version".
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code CRA2XssWEXUUA to authenticate
Thanks in advance
What is the version of the created cluster?
I'm assuming from your screenshot that you are using az to get the credentials for it.
The old Azure auth plugin is deprecated in v1.22+. If you are using v1.22 or above you should use kubelogin to authenticate.
You will also need to update your kube config accordingly:
kubelogin convert-kubeconfig
and specifically if you're logging via az:
kubelogin convert-kubeconfig -l azurecli
Note that the -l azurecli flag is important here: the default value is "devicecode", which will not treat your az session as a login method, so you would still be asked for browser authentication.
Alternatively, you can set environment variable:
AAD_LOGIN_METHOD=azurecli
Because you are getting a sign-in request rather than the deprecation warning for the auth plugin, I suspect you already have kubelogin installed on your agent and just need to update the kubeconfig file.
What task are you using? There is an official Kubectl task: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/kubernetes?view=azure-devops
It requires a service connection.
If you still want to execute kubectl directly, you should run the following before kubectl inside the AzureCLI task:
az aks get-credentials --resource-group "$(resourceGroup)" --name "$(k8sName)" --overwrite-existing
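Putting the two suggestions together, the script body of an AzureCLI task might look like the sketch below (it assumes the $(resourceGroup) and $(k8sName) pipeline variables are defined and the cluster is on v1.22+):

az aks get-credentials --resource-group "$(resourceGroup)" --name "$(k8sName)" --overwrite-existing

# Rewrite the kubeconfig so kubectl reuses the task's az session
# instead of prompting for a device code:
kubelogin convert-kubeconfig -l azurecli

# Sanity check; this should list nodes without any browser prompt:
kubectl get nodes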
Please use self-hosted agents for executing your commands. It looks like you have private endpoints for your AKS cluster, so requests are only allowed from trusted devices.
I ran into the same issue and for me the fix was to change the Connection Type in the stage definition from Azure Resource Manager to Kubernetes Service Connection - check on the screenshot below.
Then you should be able to also specify the connection type in each of the tasks where you are running kubectl or helm commands. For example, in a kubectl task, under Kubernetes Cluster --> Service connection type use the Kubernetes Service Connection:
As mentioned by @DevOpsEngg, the problem could be related to private endpoints, but I wouldn't say it is about self-hosted agents, because I'm using those. As an extra comment: this started happening when I added more than one user to the cluster, so you might want to check user permissions and authentication. Unfortunately, I'm still getting used to K8s, so I don't have more info about that.

Azure Devops: installing a Windows Service

I am trying to automate installing a Windows service using an Azure DevOps pipeline. I installed Windows Service Manager from here: https://marketplace.visualstudio.com/items?itemName=MDSolutions.WindowsServiceManager and added it to the pipeline as a task. The Windows service should be installed on the virtual machine where the pipeline runs, so I provided "LocalSystem" as the Run As Username and nothing for the password. The service was not installed, failing with the following error:
Service ' (MyServiceName)' cannot be created due to the following error: The account name is invalid or does not exist, or the password is invalid for the account name specified
I also tried the credentials I use to get into the virtual machine, but that gave the same error. How can this be solved?
Added:
The service can be installed without problems using installutil.
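For reference, a script step that calls installutil directly might look roughly like this (the framework version and the drop path are placeholders to adjust):

REM Register the service binary produced by the build (paths are hypothetical):
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe "$(System.DefaultWorkingDirectory)\drop\MyServiceName.exe"

REM Start it once registered:
sc start MyServiceName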
You could try using deployment groups to test, if you are using a private agent:
As the documentation states:
Service Name - The name of the Windows Service installed on the Deployment Group Target.
You could also refer to this similar thread for some more details.

AZ CLI login using Service Principal fails from specific computer

I have posted previously about az login with a Service Principal failing with the error "No subscriptions found", and I have run across others who have had similar issues; the capability seems shaky for some reason. What has me scratching my head now is that when I run a script that does an az login with a service principal from my desktop computer, it works fine with no issues. When I run the same script from my laptop, the login fails with the "No subscriptions found" error. What I have tried on the laptop:
Checked AZ CLI version...same as desktop
Ran az account clear to make sure everything was cleared out
Deleted Service Principal from AAD and recreated from laptop
I even ran az account clear on my desktop to make sure it was not working simply because it was cached and even after the clear, the az login worked fine.
Any thoughts on what might be causing this?
You need to assign a role to the Service Principal, or add the flag --allow-no-subscriptions.
I have posted the resolution for this under the following thread:
https://stackoverflow.com/a/66108965/1712969
You might want to try the command with the "--allow-no-subscriptions" flag.
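Concretely, the two workarounds look like this (every ID below is a placeholder):

# Option 1: give the service principal a role so the subscription becomes visible:
az role assignment create --assignee <appId> --role Contributor --scope /subscriptions/<subscription-id>

# Option 2: log in anyway, tolerating an empty subscription list:
az login --service-principal -u <appId> -p <password-or-cert> --tenant <tenant-id> --allow-no-subscriptions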

How to deploy pgadmin4 docker image on azure web app?

I am unable to run the docker image dpage/pgadmin4 (available on Docker Hub) on an Azure web app (Linux).
I installed Docker on my Linux machine and was able to run the image locally. Then I created a web app in Azure with the options given below:
OS: Linux
Publish: Docker Image
App service plan: Linux app service
After creating the web app, I added two env variables in the App Settings section:
PGADMIN_DEFAULT_EMAIL : user@domain.com
PGADMIN_DEFAULT_PASSWORD : SuperSecret
Finally the login screen is visible, but when I enter the above credentials it doesn't work and keeps redirecting me to the login page.
Update: if login is working properly, the screen appears as shown below.
(pgadmin initial screen)
After several retries I once got a message (CSRF token invalid) displayed in the top-right corner of the login screen.
For CSRF to work properly there must be some server-side state, so I activated "ARR affinity" under "General Settings" in the Azure "Configuration" blade.
I also noticed in the examples in the documentation the two environment variables PGADMIN_CONFIG_CONSOLE_LOG_LEVEL (set to '10' in the example) and PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION (set to 'True' in the example).
After enabling "ARR" and setting PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION to False the login started to work. I have no idea what PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION is actually doing, so please take that with caution.
If thats not working for you, maybe setting PGADMIN_CONFIG_CONSOLE_LOG_LEVEL to 10 and enabling console debug logging can give you a clue whats happening.
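If you would rather script those two changes than click through the portal, something like this should work with the Azure CLI (resource group and app name are placeholders):

# Turn on ARR affinity (sticky sessions) for the web app:
az webapp update --resource-group <resource-group-name> --name <app-name> --client-affinity-enabled true

# Relax pgadmin's cookie protection, which appears to misbehave behind the App Service front end:
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION=False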
For your issue, I did a test and found it's really a strange thing. When I deploy the docker image dpage/pgadmin4 to Azure Web App for Containers through the Azure CLI and set the app settings, there is no problem logging in with the user and password. But when I deploy it through the Azure portal, I see the same thing as you.
I am not sure of the reason, but the solution is to set the environment variables PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD through the Azure CLI like below:
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings PGADMIN_DEFAULT_EMAIL="user@domain.com" PGADMIN_DEFAULT_PASSWORD="SuperSecret"
If you really want to know the reason, you can send feedback to Microsoft. Maybe it's a bug or some special setting.
Update
Here is a screenshot of the test on my side:

How to Integrate GitLab-Ci w/ Azure Kubernetes + Kubectl + ACR for Deployments?

Our previous GitLab-based CI/CD used an authenticated curl request to a specific REST API endpoint to trigger redeployment of an updated container to our service. If you use something similar for your Kubernetes-based deployment, this question is for you.
More Background
We run a production site / app (Ghost blog based) on an Azure AKS cluster. Right now we manually push our updated containers to a private ACR (Azure Container Registry) and then update from the command line with kubectl.
That being said we previously used Docker Cloud for our orchestration and fully integrated re-deploying our production / staging services using GitLab-Ci.
That GitLab-Ci integration is the goal, and the 'Why' behind this question.
My Question
Since we previously used Docker Cloud (doh, should have gone K8s from the start), how should we handle the fact that GitLab-Ci was able to make use of Secrets created by the Docker Cloud CLI and then authenticate with the Docker Cloud API to trigger actions on our nodes (i.e. re-deploy with new containers, etc.)?
While I believe we can build a container (to be used by our GitLab-Ci runner) that contains kubectl and the Azure CLI, I know that Kubernetes also has a REST API similar to Docker Cloud's, documented here (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster); the section that talks about connecting WITHOUT kubectl appears to be particularly relevant (as does the piece about the HTTP REST API).
My question to anyone who is connecting to Azure (or another managed Kubernetes service):
How does your CI/CD server authenticate with your Kubernetes service provider's management server, and how do you currently trigger an update / redeployment of an updated container / service?
If you have used the Kubernetes HTTP REST API to re-deploy a service, your thoughts are particularly valuable!
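For concreteness, the kind of call I have in mind would look roughly like the sketch below; the API server address, token, deployment name (ghost-blog) and image tag are all placeholders, and extracting the token and CA certificate is covered in the answer further down:

APISERVER="https://<your-cluster>.hcp.<region>.azmk8s.io:443"   # the server field from your kubeconfig
TOKEN="<service-account-token>"                                 # a token with rights on the namespace

# Patch the deployment's container image; Kubernetes then rolls out the new container:
curl --cacert ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -X PATCH \
  "${APISERVER}/apis/apps/v1/namespaces/default/deployments/ghost-blog" \
  -d '{"spec":{"template":{"spec":{"containers":[{"name":"ghost-blog","image":"myacr.azurecr.io/ghost-blog:<new-tag>"}]}}}}'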
Kubernetes Resources I am Reviewing
How should I manage deployments with kubernetes
Kubernetes Deployments
Will update as I work through the process.
Creating the integration
I had the same problem of how to integrate GitLab CI/CD with my Azure AKS Kubernetes cluster. I created this question because I was getting an error when I tried to add my Kubernetes cluster info into GitLab.
How to integrate them:
Inside GitLab, go to "Operations" > "Kubernetes" menu.
Click on the "Add Kubernetes cluster" button on the top of the page
You will have to fill in some form fields. To get the content that you have to put into these fields, connect to your Azure account from the CLI (you need the Azure CLI installed on your PC) using the az login command, and then execute this other command to get the Kubernetes cluster credentials: az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name>
The previous command will create a ~/.kube/config file. Open this file: the content of the fields that you have to fill in on the GitLab "Add Kubernetes cluster" form is all inside this .kube/config file.
These are the fields:
Kubernetes cluster name: It's the name of your cluster on Azure, it's in the .kube/config file too.
API URL: It's the URL in the field server of the .kube/config file.
CA Certificate: It's the certificate-authority-data field of the .kube/config file, but you will have to base64-decode it (a sketch of the commands appears after this list).
After you decode it, it should look something like this:
-----BEGIN CERTIFICATE-----
...
some base64 strings here
...
-----END CERTIFICATE-----
Token: It's the string of hexadecimal chars in the token field of the .kube/config file (it might also need to be base64-decoded). You need to use a token belonging to an account with cluster-admin privileges, so GitLab can use it for authenticating and installing stuff on the cluster. The easiest way to achieve this is to create a new account for GitLab: write a YAML file with the service account definition (an example can be seen here under "Create a gitlab service account in the default namespace") and apply it to your cluster with kubectl apply -f serviceaccount.yml; see the sketch after this list.
Project namespace (optional, unique): I leave it empty; I don't know yet what this namespace would be used for.
Click "Save" and it's done. Your GitLab project should now be connected to your Kubernetes cluster.
Deploy
In your deploy job (in the pipeline), you'll need some environment variables to access your cluster using the kubectl command. Here is a list of all the variables available:
https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables
To have these variables injected into your deploy job, there are some conditions:
You must have correctly added the Kubernetes cluster to your GitLab project, via the "Operations" > "Kubernetes" menu and the steps I described above
Your job must be a "deployment job": in GitLab CI, to be considered a deployment job, your job definition (in your .gitlab-ci.yml) must have an environment key (take a look at line 31 in this example), and the environment name must match the name you used in the "Operations" > "Environments" menu.
Here is an example of a .gitlab-ci.yml with three stages (a sketch follows the tip below):
Build: builds a docker image and pushes it to the GitLab private registry
Test: doesn't do anything yet, just an exit 0 to be changed later
Deploy: downloads a stable version of kubectl, copies the .kube/config file to be able to run kubectl commands against the cluster, and executes kubectl cluster-info to make sure it is working. In my project I haven't finished writing the deploy script that really executes a deploy, but the kubectl cluster-info command executes fine.
Tip: to take a look at all the environment variables and their values (Jenkins has a page with this view; GitLab CI doesn't), you can execute the env command in the script of your deploy stage. It helps a lot when debugging a job.
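A rough sketch of what such a .gitlab-ci.yml could look like, under the assumption that the cluster integration injects KUBECONFIG into deployment jobs (the environment name is a placeholder):

stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

test:
  stage: test
  script:
    - exit 0   # placeholder, to be replaced with real tests

deploy:
  stage: deploy
  image: alpine:latest
  script:
    - apk add --no-cache curl
    - curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
    - chmod +x kubectl && mv kubectl /usr/local/bin/
    - kubectl cluster-info   # works because KUBECONFIG is injected into deployment jobs
  environment:
    name: production   # must match a name under "Operations" > "Environments"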
I logged into our GitLab-Ci backend today and saw a 'Kubernetes' button — along with an offer to save $500 at GCP.
GitLab Kubernetes
The URL for your repo's Kubernetes GitLab page is:
https://gitlab.com/^your-repo^/clusters
As I work through the integration process I will update this answer (input also welcome!).
Official GitLab Kubernetes Integration Docs
https://docs.gitlab.com/ee/user/project/clusters/index.html
