release-channel attribute in terraform for GKE cluster creation - terraform-provider-gcp

https://cloud.google.com/kubernetes-engine/docs/concepts/release-channels
offers the option to specify a release channel on cluster creation, for automatic upgrades of the cluster:
gcloud alpha container clusters create [CLUSTER-NAME] \
--zone [ZONE] \
[ADDITIONAL-FLAGS] \
--release-channel rapid
This does not seem to be possible with Terraform.
It would be nice to have this feature in Terraform too, right?

I believe the release-channel feature is still in beta.
It'd be worth raising in https://github.com/terraform-providers/terraform-provider-google-beta (a variant of the Google provider which "is now necessary to be able to configure products and features that are in beta", according to https://www.terraform.io/docs/providers/google/version_2_upgrade.html#the-google-beta-provider).
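For reference, if/when the beta provider exposes it, the configuration would presumably look something like this (a minimal sketch; the release_channel block and RAPID value simply mirror the gcloud flag above and are not confirmed provider syntax, and all names are made up):
provider "google-beta" {
  project = "my-project"       # hypothetical project
  region  = "us-central1"
}

resource "google_container_cluster" "rapid" {
  provider = "google-beta"

  name               = "rapid-cluster"    # hypothetical cluster name
  location           = "us-central1-a"
  initial_node_count = 1

  # Assumed equivalent of `gcloud ... --release-channel rapid`
  release_channel {
    channel = "RAPID"
  }
}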

When I need a resource that hasn't made it to the google provider yet I usually just create a wrapper module that calls the gcloud command. Here's an example:
https://github.com/rojopolis/terraform-google-filestore
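That wrapper pattern boils down to shelling out from Terraform. A rough sketch (not the linked module; variable names are made up, and it only covers creation):
variable "cluster_name" {}
variable "zone" {}

# Escape hatch: call gcloud for features the provider doesn't expose yet.
# Destroy/update would need similar handling.
resource "null_resource" "release_channel_cluster" {
  provisioner "local-exec" {
    command = "gcloud alpha container clusters create ${var.cluster_name} --zone ${var.zone} --release-channel rapid"
  }
}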

Related

AKS nodepool in a failed state, PODS all pending

Yesterday I was using kubectl on my command line and was getting this message after trying any command. Everything was working fine the previous day and I had not touched anything in my AKS.
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2022-01-11T12:57:51-05:00 is after 2022-01-11T13:09:11Z
After doing some googling to solve this issue, I found a guide about rotating certificates:
https://learn.microsoft.com/en-us/azure/aks/certificate-rotation
Following the rotation guide fixed my certificate issue; however, all my pods were still in a pending state, so I then followed this guide: https://learn.microsoft.com/en-us/azure/aks/update-credentials
Then one of my node pools, the one of type user, started working again, but the one of type system is still in a failed state with all pods pending.
I am not sure of the next steps I should be taking to solve this issue. Does anyone have any recommendations? I was going to delete the nodepool and make a new one but I can't do that either because it is the last system node pool.
Assuming you are using an API version older than 2020-03-01 for creating the AKS cluster:
A few limitations apply when you create and manage AKS clusters that support system node pools.
• An API version of 2020-03-01 or greater must be used to set a node pool mode. Clusters created on API versions older than 2020-03-01 contain only user node pools, but can be migrated to contain system node pools by following the update pool mode steps (see the sketch after this list).
• The mode of a node pool is a required property and must be explicitly set when using ARM templates or direct API calls.
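The migration mentioned in the first point is done per node pool; a sketch of what that looks like with the CLI (assuming the --mode flag of az aks nodepool update; names are placeholders):
az aks nodepool update \
--resource-group myResourceGroup \
--cluster-name myAKSCluster \
--name existingnodepool \
--mode System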
You can use the Bicep/JSON code provided in the MS documentation to create the AKS cluster, as it uses the upgraded API version.
You can also follow the MS documentation if you want to create a new AKS cluster with a system node pool or add a dedicated system node pool to an existing AKS cluster.
The following command adds a dedicated node pool of mode type system with a default count of three nodes.
az aks nodepool add \
--resource-group myResourceGroup \
--cluster-name myAKSCluster \
--name systempool \
--node-count 3 \
--node-taints CriticalAddonsOnly=true:NoSchedule \
--mode System
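To verify the pool modes afterwards, something like the following should work (a sketch; the JMESPath query is just one way to show name and mode):
az aks nodepool list \
--resource-group myResourceGroup \
--cluster-name myAKSCluster \
--query "[].{name:name, mode:mode}" \
--output table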

connect AAD to existing AKS that has RBAC enabled

Working with Azure, we started with AKS last year. On creation of the AKS clusters we use, we checked what needed to be done up front to enable RBAC at a later moment, and we thought that setting 'rbac' to 'enabled' was the only thing we needed. As a result, our clusters have RBAC enabled but no AAD profile configured.
Now we're trying to implement RBAC integration of AKS with AAD, but I read some seemingly conflicting prerequisites. Some say that in order to integrate AAD and AKS, you need RBAC enabled at cluster creation; I believe we have set that correctly.
But then in the Azure docs, it is mentioned that you need to create a cluster and add some AAD-integration keys for the client and server applications.
My question is actually two-fold:
1. When people say you need RBAC enabled in your AKS cluster during creation, do they actually mean you should select the 'rbac:enabled' box AND make sure you create the AAD-related applications up front and also configure these during cluster creation?
2. Is there a way to set up the AKS-AAD RBAC connection on a cluster that has rbac:enabled but misses the aadProfile configuration?
I believe we indeed need to re-create all our clusters, but I want to know for sure by asking here, as it's not 100% clear to me from what I've read online (also here at Stack Exchange), and it's going to be an awful lot of work.
For all of your requirements, you only need to make sure RBAC is enabled for your AKS cluster, and that can only be enabled at creation time. Then you can update the AAD profile of the existing cluster. CLI update command:
az aks update-credentials -g yourResourceGroup -n yourAKSCluster \
--reset-aad \
--aad-server-app-id appId \
--aad-server-app-secret appSecret \
--aad-client-app-id clientId \
--aad-tenant-id tenantId
After the update, the cluster's aadProfile reflects the server/client application IDs and tenant you passed in.
1. Yes, that is correct.
2. No, there is no way of doing that; you need to recreate.

alicloud_cs_managed_kubernetes vs alicloud_cs_kubernetes on terraform

So I was looking at the Alibaba Cloud module in Terraform and found two nearly identical resources:
alicloud_cs_managed_kubernetes
https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html
alicloud_cs_kubernetes
https://www.terraform.io/docs/providers/alicloud/r/cs_kubernetes.html
What is the difference between the two? I cannot differentiate these two resources.
The biggest difference is:
With a managed Kubernetes cluster, you can't control the Kubernetes master.
With a plain Kubernetes cluster, you need to create the master as well, e.g.:
master_instance_types = ["ecs.n4.small"]
Specifically speaking, the differences between alicloud_cs_managed_kubernetes and alicloud_cs_kubernetes on Terraform can be examined in detail with the help of the parameter reference provided in the official documentation.
But the major difference between Kubernetes and Managed Kubernetes is:
You don't need to manage the master nodes in Managed Kubernetes.
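To illustrate, a minimal side-by-side sketch (argument names other than master_instance_types are assumptions; check the parameter reference above for the full required set):
# alicloud_cs_kubernetes: you declare (and pay for) the master nodes yourself.
resource "alicloud_cs_kubernetes" "self_managed" {
  name                  = "k8s-self-managed"    # hypothetical name
  master_instance_types = ["ecs.n4.small"]      # master nodes are yours to size
  worker_instance_types = ["ecs.n4.small"]
  # ... other required arguments (vswitches, worker numbers, etc.)
}

# alicloud_cs_managed_kubernetes: Alibaba Cloud runs the masters, you only declare workers.
resource "alicloud_cs_managed_kubernetes" "managed" {
  name                  = "k8s-managed"         # hypothetical name
  worker_instance_types = ["ecs.n4.small"]
  # ... other required arguments
}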

How can I kick off a container instance using the Azure api?

I have a container building in GitLab and registering itself with the GitLab custom registry. Inside this container is a command that runs for a very long time. I would like to somehow deploy this container to Azure, and only kick off this long-running process inside a new container instance, on demand, from an administrative API service. I don't want the container running all the time, only for the time it takes to run the command.
I was thinking that this admin API could be a classic HTTP REST API service hosted under Azure "App Services", or possibly use the new "Function Apps" feature of Azure.
In my research, I found that using the Azure CLI, I can start a container like so:
az container create \
--resource-group myResourceGroup \
--name mycontainer2 \
--image microsoft/aci-wordcount:latest \
--restart-policy OnFailure \
--environment-variables NumWords=5 MinLength=8
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-environment-variables
I would like to do this from the admin API, preferably using what looks to me like the official Azure npm package located here:
https://www.npmjs.com/package/azure
Ideally, it would be a single command to create and start the instance; being able to set the environment variables at the launch of the container, as in this example, is important to me. I'm not interested in moving all my code over into Azure; I would like to continue using GitLab for the source code and container registry, but if there is some reason I have to switch to using the Azure Container Registry, I need a way to somehow move the container registration over there using the GitLab CI YAML.
In all my searching, I couldn't find any way to do this, but the closest documentation I found was here:
https://learn.microsoft.com/en-us/javascript/api/azure-arm-containerservice/containerserviceclient?view=azure-node-latest
At the current time there is no way to officially do this from the API; maybe in the future there will be.

Specifying Kubernetes version for Azure Container Service

Does anyone know if it is possible to specify the Kubernetes version when deploying ACS Kubernetes flavour?
If so how?
Using the supported resource provider in ARM you cannot specify the version. However, if you use http://github.com/Azure/acs-engine you can do so. ACS Engine is the open source code we (I work for MS) use to drive Azure Container Service. Using this code you have much more flexibility than you do through the published resource provider, but it's a harder onramp. For instructions see https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md
See examples at https://github.com/Azure/acs-engine/tree/master/examples/kubernetes-releases
You should use acs-engine and follow the deploy guide in the repo (https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/deploy.md).
In the deploy guide they use the file examples/kubernetes.json, and in that file there's:
"orchestratorProfile": {
"orchestratorType": "Kubernetes"
}
You can also add the field "orchestratorRelease": "1.7" for Kubernetes 1.7.
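Put together, the profile would then look like this:
"orchestratorProfile": {
  "orchestratorType": "Kubernetes",
  "orchestratorRelease": "1.7"
}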
To view the whole list of available releases, you can use the acs-engine executable and run acs-engine orchestrators, which prints all of them.
Other examples can be found in https://github.com/Azure/acs-engine/tree/master/examples/kubernetes-releases
