How do I set a valid ServiceDnsName in my docker-compose file when deploying to Service Fabric? - azure

I am attempting to configure CI/CD in Visual Studio Team Services to a Service Fabric cluster from a docker-compose file, but in the last step of my release definition, using the step "Service Fabric Compose Deploy", I'm getting the error: ##[error]The ServiceDnsName for DefaultService 'myapp' is invalid. The guide I'm using has a section that says:
If the service name that you specify in a Compose file is a fully qualified domain name (that is, it contains a dot [.]), the DNS name registered by Service Fabric is <ServiceName> (including the dot). If not, each path segment in the application name becomes a domain label in the service DNS name, with the first path segment becoming the top-level domain label.
For example, if the specified application name is fabric:/SampleApp/MyComposeApp, <ServiceName>.MyComposeApp.SampleApp would be the registered DNS name.
My current docker-compose file is:
version: '3'
services:
  myapp:
    image: myapp
    build:
      context: .\myapp
      dockerfile: Dockerfile
In the VSTS definition, the Application Name is set as fabric:/myapp.
How do I fix this? Ideally I would like my app to be accessible at yourendpointhere.eastus.cloudapp.azure.com/myapp. Is this possible?
Here's the log entry for the release step failure:
2018-02-14T18:30:00.8056376Z ##[section]Starting: Deploy docker-compose application to a Service Fabric cluster
2018-02-14T18:30:00.8060539Z ==============================================================================
2018-02-14T18:30:00.8060913Z Task : Service Fabric Compose Deploy
2018-02-14T18:30:00.8061507Z Description : Deploy a docker-compose application to a Service Fabric cluster.
2018-02-14T18:30:00.8061986Z Version : 0.2.3
2018-02-14T18:30:00.8062279Z Author : Microsoft Corporation
2018-02-14T18:30:00.8063154Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=847030)
2018-02-14T18:30:00.8063536Z ==============================================================================
2018-02-14T18:30:07.8698092Z Searching for path: D:\a\r1\a\**\docker-compose.yml
2018-02-14T18:30:07.8725249Z Found path: D:\a\r1\a\Drop\docker-compose\docker-compose.yml
2018-02-14T18:30:07.9023293Z Checking compose file
2018-02-14T18:30:16.8571892Z ##[warning]The Docker compose file contains the following 'keys' which are not supported. They will be ignored.
'build'
2018-02-14T18:30:17.1220353Z Imported cluster client certificate with thumbprint 'THUMBPRINT_REDACTED'.
2018-02-14T18:30:17.1899791Z
2018-02-14T18:30:17.1907044Z Thumbprint Subject
2018-02-14T18:30:17.1908148Z ---------- -------
2018-02-14T18:30:17.1909375Z THUMBPRINT_REDACTED CN=eastus.cloudapp.azure.com
2018-02-14T18:30:23.8293246Z Successfully connected to cluster.
2018-02-14T18:30:23.8391402Z Encrypting the password with the Server Certificate.
2018-02-14T18:30:25.1024103Z ##[warning]The cluster's server certificate with thumbprint '********' is required in order to encrypt text but the certificate could not be found on the agent machine in the 'CurrentUser\My' certificate store location.
2018-02-14T18:30:25.7091733Z Creating application
2018-02-14T18:30:29.7025507Z ##[error]The ServiceDnsName for DefaultService 'myapp' is invalid.
FileName: D:\SvcFab\IB\131631066259308291\2fve3lf5.5tp\ApplicationManifest.xml
2018-02-14T18:30:29.9330549Z ##[section]Finishing: Deploy docker-compose application to a Service Fabric cluster

With the help of Microsoft support I was able to solve this issue.
The Application Name was the problem: if you remove the "fabric:/" part of the application name, it works. This is a documentation error, and Microsoft support will update the documentation.
At least now the VSTS step works and the app is deployed to Service Fabric.
I now get some errors in Service Fabric itself, but that is for another question.
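For anyone deploying the same compose file outside VSTS, a minimal sketch of the equivalent with the sfctl CLI, assuming the cluster connection is already selected; the deployment name and file path are placeholders. Note that the name is passed without the "fabric:/" scheme:

# Sketch only: assumes sfctl is installed and connected via `sfctl cluster select`.
# The deployment name is plain "myapp", not "fabric:/myapp".
sfctl compose create --deployment-name myapp --file-path docker-compose.yml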

Related

Docker fails to pull the image from within Azure App Service

The Container Settings on the App Service itself look solid:
But the log pane shows errors:
2020-02-11 06:31:40.621 ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
2020-02-11 06:31:41.240 INFO - Stoping site app505-dfpg-qa2-web-eastus2-gateway-apsvc because it failed during startup.
2020-02-11 06:36:05.546 INFO - Starting container for site
2020-02-11 06:36:05.551 INFO - docker run -d -p 9621:8081 --name app505-dfpg-qa2-web-eastus2-gateway-apsvc_0_a9c8277e_msiProxy -e WEBSITE_SITE_NAME=app505-dfpg-qa2-web-eastus2-gateway-apsvc -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=app505-dfpg-qa2-web-eastus2-gateway-apsvc.azurewebsites.net -e WEBSITE_INSTANCE_ID=7d18d5957d129d3dc3a25d7a2c85147ef57f1a6b93910c50eb850417ab59dc56 appsvc/msitokenservice:1904260237
2020-02-11 06:36:05.552 INFO - Logging is not enabled for this container.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2020-02-11 06:36:17.766 INFO - Pulling image: a...cr/gateway:1.0.20042.2
2020-02-11 06:36:17.922 ERROR - DockerApiException: Docker API responded with status code=NotFound, response={"message":"pull access denied for a...cr/gateway, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"}
2020-02-11 06:36:17.923 ERROR - Pulling docker image a...cr/gateway:1.0.20042.2 failed:
2020-02-11 06:36:17.923 INFO - Pulling image from Docker hub: a...cr/gateway:1.0.20042.2
2020-02-11 06:36:18.092 ERROR - DockerApiException: Docker API responded with status code=NotFound, response={"message":"pull access denied for a...cr/gateway, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"}
2020-02-11 06:36:18.094 ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
2020-02-11 06:36:19.062 INFO - Stoping site app505-dfpg-qa2-web-eastus2-gateway-apsvc because it failed during startup.
The Service Principal used to deploy the App Service has AcrPush access to the parent resource group of the container registry:
The settings are present:
I did az login with that service principal and then tried az acr login to the registry. It works fine. So what am I missing here?
EDIT 1
I know the credentials are correct, because I tested them like this:
I copied the values from the App Service configuration and pasted them in the console; docker has no problem logging in.
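A rough reconstruction of that test; the registry name is redacted above, so the values here are placeholders:

# Placeholders: substitute the values from the DOCKER_... app settings.
docker login <registry-name>.azurecr.io --username <client-id> --password <client-secret>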
It must be something else.
EDIT 2
However, I also get this:
C:\Dayforce\fintech [shelve/terraform ≡]> docker pull a...r/gateway
Using default tag: latest
Error response from daemon: pull access denied for a...r/gateway, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
So, I can log in, but not pull. Very strange, because the account is configured to have AcrPush access to the registry, which includes AcrPull:
EDIT 3
I was able to pull successfully when using the FQDN for the registry:
I updated the pipeline, but I still get the same errors:
2020-02-11 16:03:50.227 ERROR - Pulling docker image a...r.azurecr.io/gateway:1.0.20042.2 failed:
2020-02-11 16:03:50.228 INFO - Pulling image from Docker hub: a...r.azurecr.io/gateway:1.0.20042.2
2020-02-11 16:03:50.266 ERROR - DockerApiException: Docker API responded with status code=InternalServerError, response={"message":"Get https://a...r.azurecr.io/v2/gateway/manifests/1.0.20042.2: unauthorized: authentication required"}
2020-02-11 16:03:50.269 ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
2020-02-11 16:03:50.853 INFO - Stoping site app505-dfpg-qa2-web-eastus2-gateway-apsvc because it failed during startup.
EDIT 4
The only way I found that works was to enable the Admin User on the ACR and pass its credentials in the DOCKER_... variables instead of the credentials of the Service Principal.
This is frustrating. I know the Service Principal can log in and pull when run locally; it is a mystery why it does not work for docker running on an App Service host. Another team here faced the same issue and found no solution other than enabling the Admin User.
EDIT 5
The entire process runs as part of the Azure DevOps on-prem release pipeline using a dedicated Service Principal. Let me call it Pod Deploy Service Principal or just SP for short.
Let DOCKER_xyz denote the three app settings controlling the docker running on the App Service host:
DOCKER_REGISTRY_SERVER_URL
DOCKER_REGISTRY_SERVER_USERNAME
DOCKER_REGISTRY_SERVER_PASSWORD
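For reference, a sketch of setting these three values from the Azure CLI; the resource group, app, and registry names are placeholders:

az webapp config appsettings set \
  --resource-group <resource-group> --name <app-name> \
  --settings DOCKER_REGISTRY_SERVER_URL=https://<registry-name>.azurecr.io \
             DOCKER_REGISTRY_SERVER_USERNAME=<client-id> \
             DOCKER_REGISTRY_SERVER_PASSWORD=<client-secret>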
I think we need to distinguish two parts here:
1. App Service needs to talk to the ACR in order to pull the details about the image and present them in the GUI. For that to work, the SP must have the AcrPull role in the ACR; failure to do so results in the GUI presenting a spinning icon for the Image and Tag rows. I stumbled on this before - How to configure an Azure app service to pull images from an ACR with terraform? The answer to that question suggests assigning the AcrPull role and setting the DOCKER_xyz app settings, but I think the DOCKER_xyz app settings are not for that; they are for the second part.
2. It seems to me that when an App Service is started, the host uses docker to actually pull the right image from the ACR. This part seems to be detached from (1). For it to work, the DOCKER_xyz app settings must be present.
My problem is that part (1) works great, but part (2) does not, even when the DOCKER_xyz app settings specify the credentials of the SP from part (1). The only way I could make it work was to point DOCKER_xyz at the Admin User of the ACR.
But then why on earth can the DOCKER_xyz app settings not point to the pipeline SP, which was good enough for part (1)?
EDIT 6
The current state of affairs is this: Azure App Service is unable to communicate with an ACR except by using the ACR admin user and password. So, even though the docker runtime on the App Service host machine may know how to log in using any service principal, the App Service will not use any identity or Service Principal to read metadata from the ACR - only the admin user and password. The relevant references are:
https://feedback.azure.com/forums/169385-web-apps/suggestions/36145444-web-app-for-containers-acr-access-requires-admin#%7btoggle_previous_statuses%7d
https://github.com/MicrosoftDocs/azure-docs/issues/49186
On a personal note, I find it amazing that Microsoft recommends not using the ACR admin user, yet a very core piece of their offering, namely Azure App Service, depends on it being enabled. Makes me wonder whether different teams in Microsoft are aware of what the others are doing or not doing...
The App Service started pulling after I did these steps. :D
1. Enable Admin Access in the Azure Container Registry.
2. In the App Service configuration, provide the container registry admin credentials:
DOCKER_REGISTRY_SERVER_PASSWORD (admin enabled password),
DOCKER_REGISTRY_SERVER_USERNAME (crxxxxxx),
DOCKER_REGISTRY_SERVER_URL (https://crxxxxxx.azurecr.io)
3. Go to your App Service, select the Identity section on the left, click on System assigned and change the status to On.
4. Now go to the container registry's IAM (access control) and add the AcrPull role to the App Service's system-assigned identity enabled in step 3.
5. Restart your App Service and wait; changes take a few minutes (10 or more) to reflect, so refresh your logs.
Good luck :)
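Steps 3 and 4 can also be done from the CLI; a sketch with placeholder names:

# Enable the system-assigned identity (step 3) and capture its principal id.
az webapp identity assign --resource-group <rg> --name <app-name>
principalId=$(az webapp identity show --resource-group <rg> --name <app-name> --query principalId -o tsv)

# Grant that identity AcrPull on the registry (step 4).
acrId=$(az acr show --name <registry-name> --query id -o tsv)
az role assignment create --assignee "$principalId" --role AcrPull --scope "$acrId"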
After a lot of research I figured out a way to resolve this without enabling the Admin user:
Create an app registration in Azure Active Directory and store the secret somewhere.
Go to the Azure container registry and add a role assignment for this newly created app with AcrPush permissions (which also contain AcrPull).
In the App Service configuration, replace the variables:
DOCKER_REGISTRY_SERVER_PASSWORD with the Client Secret of the app registration saved in the first step
DOCKER_REGISTRY_SERVER_USERNAME with the Client ID of the app registration
This should solve the Docker API exception.
It's baffling that this is not mentioned in any Azure Container Registry documentation, although I think it is mentioned indirectly somewhere in the AAD documentation 😐.
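A possible CLI shortcut for the first two steps; the names are placeholders, and create-for-rbac produces the app registration, the secret, and the role assignment in one go:

acrId=$(az acr show --name <registry-name> --query id -o tsv)
az ad sp create-for-rbac --name <sp-name> --role AcrPush --scopes "$acrId"
# The appId from the output goes into DOCKER_REGISTRY_SERVER_USERNAME,
# and the password into DOCKER_REGISTRY_SERVER_PASSWORD.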
From what I gathered in the discussion, let me try to solve the puzzle about the error.
I guess you deployed the image in the ACR to the Web App through the Azure portal. When you use the Azure portal to deploy the Web App from the ACR, it only lets you select the ACR, image, and tag, but it does not let you set the credential. In this case, Azure sets the credential itself, using the admin user and password if you have the admin user enabled. If you have not enabled it, the error you got happens.
If you want to use the service principal, I recommend using other tools, such as the Azure CLI. Then you can set the docker registry credential yourself with the command az webapp config container set.
Here is an example, and it works fine on my side:
With the Azure CLI, you can follow the steps here.
Update:
Here are the screenshots of the test on my side:
Found the answer by setting "acrUseManagedIdentityCreds" to True. The second command in this comment: https://stackoverflow.com/a/69120462/17430834
Edit 1: Adding the command
Here is the command that you will need to run to make this change.
az resource update --ids /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/<app-name>/config/web --set properties.acrUseManagedIdentityCreds=True
I was trying to do the same from Azure DevOps pipelines and got the same problem.
I didn't find out how to make it work using the ACR name, but it works if you use your_acr_name.azurecr.io instead.
If you go to the Access Keys page of your ACR you will find two values:
Registry name: MyCoolRegistry (doesn't work if you use this one)
Login server: mycoolregistry.azurecr.io
The login server is the one that works - just put it as the containerRegistry in your pipeline without creating a service connection.
Just in case someone is struggling with that one.
Just to add to mark's amazing job of working it all through, and for the fast readers: for everything to work, one of course also has to enable the admin user (which is disabled by default), for example by issuing:
az acr update -n <your-azureregistry-name> --admin-enabled true
on the console.
I experienced this same issue when trying to deploy a Docker application to Azure Web Apps for Containers.
When I deployed the application I would get the error:
DockerApiException: Docker API responded with status code=NotFound, response={"message":"pull access denied for a..my-repo/image, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"}.
Here's how I solved it:
The issue was that I was not specifying the full path to the image. I was supposed to include my-registry-url in the docker image name; that is, instead of just image-name I was supposed to use my-registry-url/image-name, since I am pulling from a private repository.
So say these are variables:
docker image name is promiseapp
docker-registry_url is promisecicdregistry.azurecr.io
resource-group is dockerprojects
app-service-plan is dockerlinuxprojects
azure-web-app name is promiseapptest
docker-registry-user is test-user
docker-registry-password is 12345678
Then my commands will be:
az webapp create --resource-group dockerprojects --plan dockerlinuxprojects --name promiseapptest --deployment-container-image-name promisecicdregistry.azurecr.io/promiseapp
az webapp config container set --resource-group dockerprojects --name promiseapptest --docker-custom-image-name promisecicdregistry.azurecr.io/promiseapp --docker-registry-server-url https://promisecicdregistry.azurecr.io --docker-registry-server-user test-user --docker-registry-server-password 12345678
In my case, I fixed the error by using the fully qualified Azure Container Registry name like this:
xwezi.azurecr.io
The previous value was
xwezi
When I deployed manually to App Services, I wouldn't get that error.
But when I used the Azure App Service deploy task to deploy the container to the App Service, the service wouldn't work correctly, and the log stream would show the above errors.
Unfortunately, the error messages weren't helpful for me in finding this out, but I hope this will save you some time :)

Azure deployment The value of deployment parameter 'dockerRegistryUrl' is null

I'm trying to deploy a web app using docker-compose, an Azure container registry, and some public images, but when I get to the review step it gives me this error:
The value of deployment parameter 'dockerRegistryUrl' is null. Please specify the value or use the parameter reference. See https://aka.ms/resource-manager-parameter-files for details.
here is how I'm linking the azure container registry
image: csym023.azurecr.io/csym023_api:latest
...
image: csym023.azurecr.io/csym023_app:latest
I think I may have set up the docker-compose file incorrectly for the Azure container registry, but I am not sure. The documentation link isn't very clear to me; it doesn't say anything about 'dockerRegistryUrl' or where to upload the resource manager parameter file.
Here is the Docker compose file:
For your issue: the "dockerRegistryUrl" is actually not a property in the docker-compose file; it's an environment variable of the Azure Web App for Containers if you use the template.
So if you use the ACR for your images, you need to set the environment variables DOCKER_REGISTRY_SERVER_URL, DOCKER_REGISTRY_SERVER_PASSWORD and DOCKER_REGISTRY_SERVER_USERNAME in the app settings. Also, WEBSITES_ENABLE_APP_SERVICE_STORAGE is necessary.
In addition, you need to stick to the Docker compose options that are supported in Azure; you can find the details here.
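As a sketch, the parameter can also be supplied at deployment time from the CLI. Only dockerRegistryUrl comes from the error message; the template file name and the other parameter names here are hypothetical:

az group deployment create \
  --resource-group <resource-group> \
  --template-file template.json \
  --parameters dockerRegistryUrl=https://csym023.azurecr.io \
               dockerRegistryUsername=<client-id> \
               dockerRegistryPassword=<client-secret>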

Unable to configure cloud provider (azure) with OpenShift Origin

I want to add a cloud provider (Azure) for my persistent volume storage (Azure File).
I have added the cloud provider details in the inventory, run prerequisites.yml from openshift-ansible, and also run deploy_cluster.yml.
The installation completes successfully and the cloud provider details are automatically added into node-config.yml, but they are missing in master-config.yml.
If I add the details manually in master-config.yml, it gives me an error: all the running docker images go down.
If I leave master-config.yml without the cloud provider details, then it works properly, but the cloud provider fails to be configured with OpenShift.
The link which I have followed:
https://docs.openshift.com/container-platform/3.9/install_config/configuring_azure.html
Automatically generated
kubeletArguments:
  cloud-provider:
    - "azure"
  cloud-config:
    - "/etc/azure/azure.conf"
Error after adding
kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "azure"
    cloud-config:
      - "/etc/azure/azure.conf"
  controllerArguments:
    cloud-provider:
      - "azure"
    cloud-config:
      - "/etc/azure/azure.conf"
Version
oc v3.9.0+71543b2-33
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
Current Result
Not Configured
Expected Result
Cloud Provider (azure) should be configured with OpenShift

Azure DevOps project - Service Fabric deploy - sample failing

I have created a new Azure DevOps project. Asp.Net core 2.1, Service Fabric deploy.
First deploy went fine. Without any changes, subsequent releases are failing.
Warnings and errors:
2018-10-10T08:24:17.8368242Z ##[section]Starting: Deploy Service Fabric Application
2018-10-10T08:24:17.8375072Z ==============================================================================
2018-10-10T08:24:17.8375163Z Task : Service Fabric Application Deployment
2018-10-10T08:24:17.8375234Z Description : Deploy a Service Fabric application to a cluster.
2018-10-10T08:24:17.8375288Z Version : 1.7.22
2018-10-10T08:24:17.8375356Z Author : Microsoft Corporation
2018-10-10T08:24:17.8375410Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkId=820528)
2018-10-10T08:24:17.8375479Z ==============================================================================
2018-10-10T08:24:20.0073284Z Searching for path: D:\a\r1\a\**\drop\projectartifacts\**\PublishProfiles\Cloud.xml
2018-10-10T08:24:20.2879096Z Found path: D:\a\r1\a\Drop\drop\projectartifacts\Application\Voting\PublishProfiles\Cloud.xml
2018-10-10T08:24:20.3657104Z Searching for path: D:\a\r1\a\**\drop\applicationpackage
2018-10-10T08:24:20.4618957Z Found path: D:\a\r1\a\Drop\drop\applicationpackage
2018-10-10T08:24:20.7317155Z Imported cluster client certificate with thumbprint '25826D862588CBFA3D2113D882255156F7233F44'.
2018-10-10T08:25:02.0637557Z ##[warning]Failed to contact Naming Service. Attempting to contact Failover Manager Service...
2018-10-10T08:25:42.0730582Z ##[warning]Failed to contact Failover Manager Service, Attempting to contact FMM...
2018-10-10T08:26:22.0962942Z ##[warning]No such host is known
2018-10-10T08:26:22.2408731Z Service fabric SDK version: 3.2.176.9494.
2018-10-10T08:26:22.4279087Z ##[error]No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue.
2018-10-10T08:26:22.4687237Z ##[section]Finishing: Deploy Service Fabric Application
All other DevOps project releases are also failing for the same reason.
Any help with debugging is appreciated.
Well, this clearly has nothing to do with the release if all the releases are failing; something happened to your cluster or to your service endpoint.
You would need to check if you can connect to the cluster endpoint manually, for example with PowerShell (Connect-ServiceFabricCluster or something along those lines).
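An equivalent connectivity check with the sfctl CLI, as a sketch; the endpoint and certificate path are placeholders:

sfctl cluster select --endpoint https://<cluster>.eastus.cloudapp.azure.com:19080 \
  --pem client-cert.pem --no-verify
sfctl cluster health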
It was a misunderstanding of the built-in release task on my part.
I guess the cluster was created by the DevOps project creation, not by the release task as I thought.

How to Integrate GitLab-Ci w/ Azure Kubernetes + Kubectl + ACR for Deployments?

Our previous GitLab-based CI/CD used an authenticated curl request to a specific REST API endpoint to trigger the redeployment of an updated container to our service. If you use something similar for your Kubernetes-based deployment, this question is for you.
More Background
We run a production site / app (Ghost blog based) on an Azure AKS Cluster. Right now we manually push our updated containers to a private ACR (Azure Container Registry) and then update from the command line with Kubectl.
That being said we previously used Docker Cloud for our orchestration and fully integrated re-deploying our production / staging services using GitLab-Ci.
That GitLab-Ci integration is the goal, and the 'Why' behind this question.
My Question
Since we previously used Docker Cloud (doh, should have gone K8s from the start), how should we handle the fact that GitLab-Ci was able to make use of secrets created by the Docker Cloud CLI and then authenticate with the Docker Cloud API to trigger actions on our nodes (i.e. re-deploy with new containers, etc.)?
While I believe we can build a container (to be used by our GitLab-Ci runner) that contains Kubectl, and the Azure CLI, I know that Kubernetes also has a similar (to docker cloud) Rest API that can be found here (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster) — specifically the section that talks about connecting WITHOUT Kubectl appears to be relevant (as does the piece about the HTTP REST API).
My Question to anyone who is connecting to an Azure (or potentially other managed Kubernetes service):
How does your Ci/CD server authenticate with your Kubernetes service provider's Management Server, and then how do you currently trigger an update / redeployment of an updated container / service?
If you have used the Kubernetes HTTP REST API to re-deploy a service, your thoughts are particularly valuable!
Kubernetes Resources I am Reviewing
How should I manage deployments with kubernetes
Kubernetes Deployments
Will update as I work through the process.
Creating the integration
I had the same problem of how to integrate GitLab CI/CD with my Azure AKS Kubernetes cluster. I created this question because I was getting an error when I tried to add my Kubernetes cluster info into GitLab.
How to integrate them:
Inside GitLab, go to "Operations" > "Kubernetes" menu.
Click on the "Add Kubernetes cluster" button on the top of the page
You will have to fill in some form fields. To get the content that goes into these fields, connect to your Azure account from the CLI (you need the Azure CLI installed on your PC) using the az login command, and then execute this other command to get the Kubernetes cluster credentials: az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name>
The previous command will create a ~/.kube/config file. Open this file; the content of the fields that you have to fill in the GitLab "Add Kubernetes cluster" form is all inside this .kube/config file.
These are the fields:
Kubernetes cluster name: It's the name of your cluster on Azure, it's in the .kube/config file too.
API URL: It's the URL in the field server of the .kube/config file.
CA Certificate: It's the field certificate-authority-data of the .kube/config file, but you will have to base64 decode it (see the sketch after this list).
After you decode it, it must be something like this:
-----BEGIN CERTIFICATE-----
...
some base64 strings here
...
-----END CERTIFICATE-----
Token: It's the string of hexadecimal chars in the field token of the .kube/config file (it might also need to be base64 decoded?). You need to use a token belonging to an account with cluster-admin privileges, so GitLab can use it for authenticating and installing stuff on the cluster. The easiest way to achieve this is by creating a new account for GitLab: create a YAML file with the service account definition (an example can be seen here under Create a gitlab service account in the default namespace) and apply it to your cluster by means of kubectl apply -f serviceaccount.yml; the sketch after this list shows an equivalent with plain kubectl commands.
Project namespace (optional, unique): I left it empty; I don't know yet what this namespace is used for.
Click in "Save" and it's done. Your GitLab project must be connected to your Kubernetes cluster now.
Deploy
In your deploy job (in the pipeline), you'll need some environment variables to access your cluster using the kubectl command, here is a list of all the variables available:
https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables
To have these variables injected in your deploy job, there are some conditions:
You must have added the Kubernetes cluster correctly into your GitLab project (menu "Operations" > "Kubernetes" and the steps that I described above).
Your job must be a "deployment job". In GitLab CI, to be considered a deployment job, your job definition (in your .gitlab-ci.yml) must have an environment key (take a look at line 31 in this example), and the environment name must match the name you used in menu "Operations" > "Environments".
Here is an example of a .gitlab-ci.yml with three stages:
Build: builds a docker image and pushes it to the GitLab private registry
Test: doesn't do anything yet, just an exit 0, to be changed later
Deploy: downloads a stable version of kubectl, copies the .kube/config file to be able to run kubectl commands against the cluster, and executes kubectl cluster-info to make sure it is working. In my project I didn't finish writing the deploy script to really execute a deploy, but the kubectl cluster-info command executes fine.
Tip: to take a look at all the environment variables and their values (Jenkins has a page with this view, GitLab CI doesn't) you can execute the command env in the script of your deploy stage. It helps a lot to debug a job.
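As a sketch, the finished deploy script might look something like this; the deployment and container names, image path, and tag are placeholders, and KUBECONFIG is one of the variables GitLab injects into deployment jobs:

kubectl cluster-info
kubectl set image deployment/<deployment-name> \
  <container-name>=registry.gitlab.com/<group>/<project>:<tag>
kubectl rollout status deployment/<deployment-name>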
I logged into our GitLab-Ci backend today and saw a 'Kubernetes' button, along with an offer to save $500 at GCP.
GitLab Kubernetes
The URL for your repo's Kubernetes GitLab page is:
https://gitlab.com/^your-repo^/clusters
As I work through the integration process I will update this answer (updates from others are also welcome!).
Official GitLab Kubernetes Integration Docs
https://docs.gitlab.com/ee/user/project/clusters/index.html
