[Question posted by a user on YugabyteDB Community Slack]
In this doc: https://docs.yugabyte.com/preview/deploy/manual-deployment/system-config/#ntp, it mentions:
If you're using systemd to start the processes…
If YugabyteDB is deployed on k8s with the Helm chart, under the hood, are yb-master / yb-tserver started using systemd?
Second question:
The above instruction is located under the "Manual Deployment" section of the doc. In the k8s deployment section: https://docs.yugabyte.com/preview/deploy/kubernetes/single-zone/oss/yugabyte-operator/ there are no instructions on "system configuration".
I am wondering: we still want to configure each node for "system configuration", since the Helm chart deployment won't set those ulimits for us, right?
Most containers don't have systemd; the yb-master / yb-tserver processes are started directly by the container runtime as the entrypoint.
Good point: node preparation should ideally set the ulimits if you are preparing your own Kubernetes cluster.
Typically, ulimits on most cloud Kubernetes clusters like GKE/EKS are set to reasonable values. Ulimits generally cannot be set from within pods unless they are privileged pods.
Since yb containers don't have systemd, there is no need to configure /etc/systemd/system.conf.
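For example (a rough check only, assuming the chart's default naming such as a yb-demo namespace and yb-master-0 / yb-tserver-0 pods), you can verify the limits the containers actually inherit from the node:
kubectl exec -n yb-demo yb-master-0 -c yb-master -- sh -c 'ulimit -n'
kubectl exec -n yb-demo yb-tserver-0 -c yb-tserver -- sh -c 'ulimit -n'
If the reported open-file limit is too low, it has to be raised on the nodes themselves (or via some privileged init mechanism), not from inside the unprivileged pods.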
Related
I am fiddling around with Kubernetes on a small managed cluster within AKS.
It looks like I'm ready to deploy, since my node pools were already provisioned and bootstrapped (or so it appears) on setup.
Am I missing something here?
Do I really need kubeadm on a managed cloud cluster?
You DO NOT need the kubeadm tool when using Azure AKS / AWS EKS / Google GKE managed Kubernetes clusters.
kubeadm is used to create a self-managed Kubernetes cluster.
You can use the kubeadm tool to create and manage Kubernetes clusters. It performs the actions necessary to get a minimum viable, secure cluster up and running in a user-friendly way.
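As a rough contrast (the resource-group and cluster names below are just placeholders): on self-managed VMs you bootstrap the control plane yourself with kubeadm init on the first node and kubeadm join on the others, whereas on AKS the control plane already exists and you only fetch credentials:
az aks get-credentials --resource-group my-rg --name my-aks-cluster
kubectl get nodes
So on AKS/EKS/GKE there is nothing left for kubeadm to bootstrap.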
I have a requirement to use Docker containers in PCF deployed in Azure.
And now we want to use Kubernetes for container orchestration.
Can Kubernetes be used here?
Or will PCF take care of the container orchestration?
Which one would be the better approach here?
PKS is Pivotal's answer to running Kubernetes in PCF (regardless of IaaS).
Pivotal Cloud Foundry (PCF) is Pivotal's sophisticated answer to current cloud expectations. PCF offers the best platform to run Microsoft-based technology like .NET, and smoothly supports enterprise Java applications. You can run Kubernetes there with fine results, but to achieve comfortable orchestration and management of containers I suggest reading about GKE or setting up your own Kubernetes cluster using the kubespray utility.
I'm trying to enable RBAC on my k8s cluster on Azure. I SSH'ed into my master node and edited kube-apiserver.yaml with the --authorization-mode=RBAC flag. Then I deleted the kube-apiserver pod in order to restart the API server. However, upon restart the --authorization-mode=RBAC config is ignored. Anybody have any advice?
Also, the API server verbosity is set to --v=10 and the image is v1.6.6.
Deleting the pod is not enough. You need to restart kubelet in order for the new options to be applied.
systemctl restart kubelet
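For illustration, assuming the default static-pod manifest path (it can differ depending on how the Azure cluster was provisioned):
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml   (add --authorization-mode=RBAC to the kube-apiserver command args)
sudo systemctl restart kubelet
kubectl -n kube-system get pods | grep apiserver   (confirm the pod was recreated with the new flag)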
I was finally able to generate a cluster that would allow me to enable RBAC on Azure by generating an ARM template using the Azure Container Service Engine (acs-engine):
https://github.com/Azure/acs-engine
By using the above library I could create a new ARM template with RBAC enabled and then use the Azure CLI to create an RBAC-enabled, configurable Kubernetes cluster.
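A minimal sketch of the relevant apimodel fragment and deployment steps (field names and output paths are from the acs-engine versions current at the time and may differ; the resource-group and cluster names are placeholders):
"orchestratorProfile": {
  "orchestratorType": "Kubernetes",
  "kubernetesConfig": { "enableRbac": true }
}
acs-engine generate kubernetes.json
az group deployment create --resource-group my-rg --template-file _output/mycluster/azuredeploy.json --parameters _output/mycluster/azuredeploy.parameters.json
acs-engine prints the actual output directory when it generates the templates.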
I would like to provide some pre-defined scripts which need to be run in Azure Container Service Swarm cluster VMs. Is there a way I can provide these pre-defined scripts to ACS, so that when the cluster is deployed, these pre-defined scripts automatically get executed in all the VMs of that ACS cluster?
This is not a supported feature of ACS, though you can run arbitrary scripts on the VMs once they are created. Another option is to use ACS Engine, which allows much more control over the configuration of the cluster as it outputs templates that you can configure. See http://github.com/Azure/acs-engine.
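One hedged example of the "run scripts afterwards" route, using the Azure CLI run-command feature (the resource-group and VM names and the script URL are placeholders):
az vm run-command invoke --resource-group my-acs-rg --name swarm-master-0 --command-id RunShellScript --scripts "curl -sSL https://example.com/bootstrap.sh | bash"
You would repeat this (or loop over az vm list output) for each VM in the cluster; it runs post-provisioning and is not part of the ACS deployment itself.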
I just tried Docker 4 Azure Beta, but I'm confused right at the beginning.
When I SSH into the Docker manager VM and run docker info or docker service ls, I see that there is no swarm enabled.
Why? What's going on here? Do I have to enable it by myself? But as I understand, I have no option to SSH into my worker nodes, so how would I do this?
Or is there some "azure-clustering" configured instead of Docker Swarm? But why would they do so, and how can I manage my cluster/swarm in that case (scaling container count up and down)?
After you log in to one of the manager VMs, you should be able to execute docker node ls and see all the VMs brought up through the ARM template as part of the swarm. The swarm should have been configured for you as part of the VM bring-up. Did you correctly generate an Azure Service Principal and pass the credential details to the ARM template?
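For example, once you are on a manager (the service name below is just a placeholder):
docker node ls
docker service ls
docker service scale my-web=5
docker node ls should list every manager and agent VM as Ready, and docker service scale is how you change the container count up and down.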
Old school Docker clustering (swarm) is supported out of the box.
Swarm Mode is only available in the UK region at the moment.
I was wondering why running a custom template gives me the option to select Swarm Mode and also asks if I am in the UK; then I found this!
I'll give it a try to check what happens if I select Swarm Mode, but I am quite certain the deployment will fail since my account is in Greece.