I'm trying to enable RBAC on my k8s cluster on Azure. I SSH'ed into my master node and edited kube-apiserver.yaml to add the --authorization-mode=RBAC flag, then deleted the kube-apiserver pod in order to restart the API server. However, upon restart the --authorization-mode=RBAC setting is ignored. Does anybody have any advice?
Also, the API server is running with --v=10 and the image is v1.6.6.
Deleting the pod is not enough. The API server runs as a static pod whose manifest is read by the kubelet, so you need to restart the kubelet for the new options to be applied:
systemctl restart kubelet
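For reference, the flag belongs in the static pod manifest that the kubelet watches. A minimal sketch, assuming the default manifest path (adjust for your cluster's layout):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (static pod manifest)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --authorization-mode=RBAC   # the flag being added
    # ... existing flags unchanged ...
```

After saving the file and restarting the kubelet, you can confirm the flag took effect by inspecting the running pod's command line with `kubectl describe pod kube-apiserver -n kube-system`.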
I was finally able to create a cluster that would allow me to enable RBAC on Azure by generating an ARM template using Azure Container Service Engine:
https://github.com/Azure/acs-engine
Using the above tool I could generate a new ARM template with RBAC enabled, then use the Azure CLI to deploy an RBAC-enabled, configurable Kubernetes cluster.
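For anyone following the same route, a rough sketch of the workflow (resource-group and file names are placeholders; acs-engine's apimodel exposes RBAC via `kubernetesConfig.enableRbac`):

```shell
# kubernetes.json is your acs-engine apimodel, containing
# "orchestratorProfile": { "kubernetesConfig": { "enableRbac": true } }
acs-engine generate kubernetes.json

# Deploy the generated ARM template with the Azure CLI
az group create --name my-k8s-rg --location westus2
az group deployment create \
  --resource-group my-k8s-rg \
  --template-file _output/<dnsPrefix>/azuredeploy.json \
  --parameters _output/<dnsPrefix>/azuredeploy.parameters.json
```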
Related
I have set up a Kubernetes cluster using aks-engine, deploying a configuration that enables feature gates (see the aks-engine docs on GitHub), in this case --feature-gates=WindowsGMSA=true.
The installation is successful and the cluster is created with 2 nodes (a Windows agent and a Linux master). Now I am trying to domain-join the Windows node and create a gMSA to enable authentication.
I am unclear on the process, as I am new to AKS and Azure/Active Directory. Any help would be appreciated.
I created an AD DS instance in Azure and tried to domain-join the Windows node, but it is not working; maybe I am doing this wrong.
I'm trying to figure out the steps to set up CI/CD for an ASP.NET Core web application on AKS using VSTS. Are the steps described in https://learn.microsoft.com/en-us/vsts/build-release/apps/cd/azure/deploy-container-kubernetes valid for what I'm trying to do? Are Windows containers supported in AKS?
If your application is ASP.NET Core, you can host it on Linux, since the code is platform-independent. I have done this with a Dockerfile, building the container as a self-hosted app running on AKS.
VSTS provides a built-in task to deploy to your AKS cluster from your build pipeline.
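As a minimal sketch of that setup (the image tags match the .NET Core 2.0 era of this thread, and `MyApp` is a placeholder for your project name):

```dockerfile
# Build stage: restore packages and publish the app
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore && dotnet publish -c Release -o /app

# Runtime stage: small Linux image, self-hosted Kestrel app for AKS
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```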
Windows support on k8s is better with Windows Server version 1709, which needs Kubernetes v1.9 (bleeding-edge stable). See https://kubernetes.io/docs/getting-started-guides/windows/
Unfortunately, at this time, AKS preview only supports up to 1.8.2.
Frosty, if you can create a Docker image out of your Windows machine, it can be pushed to a container registry and then deployed to a Kubernetes cluster. Here are some links for reference:
Building and Pushing Windows container images: https://blog.docker.com/2016/09/build-your-first-docker-windows-server-container/
Install Azure CLI: https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest
Create Kubernetes cluster in AKS: https://coderise.io/kubernetes-cluster-on-azure-container-service/
Windows containers are in private preview in AKS (reference); you can sign up using this form. You can run hybrid clusters (Linux + Windows, up to 1803) using acs-engine today.
The VSTS walkthrough you linked is valid; check also this one and this one.
Update: Windows support for AKS is still a work in progress.
Currently Windows containers are only in private preview, and you need to enable them via a few steps with the Azure CLI; please refer to the official docs: https://learn.microsoft.com/en-us/azure/aks/windows-container-cli. After you enable it, you can check the 'Windows Container' option when you create a node pool in your Azure Kubernetes Service account.
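The enablement steps from that doc looked roughly like the following at the time of the preview (feature and flag names may have changed since, so treat this as a sketch; resource names and credentials are placeholders):

```shell
# Install the preview CLI extension and register the preview feature
az extension add --name aks-preview
az feature register --name WindowsPreview --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.ContainerService

# Create a cluster with Azure CNI (required for Windows node pools),
# then add a Windows node pool
az aks create --resource-group myRG --name myAKS \
  --network-plugin azure \
  --windows-admin-username azureuser --windows-admin-password '<password>'
az aks nodepool add --resource-group myRG --cluster-name myAKS \
  --name npwin --os-type Windows --node-count 1
```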
I can bind an application to PCF Autoscaler on deployment by using the manifest.yml. The application shows up in the Autoscaler UI, but now has to be manually enabled and configured.
How can we set up our deployment so that the application is automatically enabled and configured in PCF Autoscaler? Can we set some values in the manifest.yml to control this? Or is there a way to configure this through the PCF command line?
Install the autoscaling-cli-plugin on your build machine.
You can then use scripting in your pipeline to apply the settings after deploying the app.
Note: I just checked the plugin page; the prebuilt release only supports autoscaling by CPU. It would need to be updated to allow autoscaling by HTTP requests, etc.
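A sketch of what that post-deploy step could look like; `cf install-plugin` is a standard CF CLI command, but the autoscaling subcommand names below are assumptions, so verify them against the README of the plugin version you install:

```shell
# On the build machine: install the downloaded plugin binary
cf install-plugin ./autoscaling-cli-plugin -f

# Hypothetical post-deploy pipeline step: enable autoscaling and
# configure limits/rules for the freshly pushed app
cf enable-autoscaling my-app
cf update-autoscaling-limits my-app 2 8       # min 2 / max 8 instances
cf create-autoscaling-rule my-app cpu 30 80   # scale on 30-80% CPU
```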
I would like to provide some pre-defined scripts which need to be run in Azure Container Service Swarm cluster VMs. Is there a way I can provide these pre-defined scripts to ACS, so that when the cluster is deployed, they automatically get executed on all the VMs of that ACS cluster?
This is not a supported feature of ACS, though you can run arbitrary scripts on the VMs once they are created. Another option is ACS Engine, which allows much more control over the cluster configuration, since it outputs templates that you can customize. See http://github.com/Azure/acs-engine.
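For the "run scripts on the VMs once created" route, one option is the Azure Custom Script Extension, which pushes a script to an existing VM; a sketch with placeholder names and URL:

```shell
# Runs bootstrap.sh on one VM; repeat (or loop) for each VM in the cluster
az vm extension set \
  --resource-group my-acs-rg \
  --vm-name my-swarm-agent-0 \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --settings '{"fileUris": ["https://example.com/bootstrap.sh"], "commandToExecute": "bash bootstrap.sh"}'
```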
I just tried Docker 4 Azure Beta, but I'm confused right at the beginning.
When I SSH into the Docker manager VM and run docker info or docker service ls, I see that there is no swarm enabled.
Why? What's going on here? Do I have to enable it by myself? But as I understand, I have no option to SSH into my worker nodes, so how would I do this?
Or is there some "azure clustering" configured instead of Docker Swarm? But why would they do that, and how can I manage my cluster/swarm in that case (scaling container count up and down)?
After you login to one of the manager VMs, you should be able to execute docker node ls and see all the VMs brought up through the ARM template as part of the swarm. The swarm should have been configured for you as part of the VM bring-up. Did you correctly generate an Azure Service Principal and pass in the credential details to the ARM template?
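A quick way to verify, once logged in to a manager VM (assuming a Docker CLI recent enough to support Go-template output):

```shell
# List swarm members; every VM from the ARM template should appear
docker node ls

# Check whether Swarm Mode is active on this node
docker info --format '{{.Swarm.LocalNodeState}}'   # "active" when swarm is on
```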
Old school Docker clustering (swarm) is supported out of the box.
Swarm Mode is available in the UK region at the moment.
I was wondering why running a custom template gives me the option to select Swarm Mode and also asks if I am in the UK, then I found this!
I'll give it a try to see what happens if I select Swarm Mode, but I am quite certain the deployment will fail, since my account is in Greece.