[Question posted by a user on YugabyteDB Community Slack]
Can we change the timezone at the server level? Suppose I want to deploy a YugabyteDB cluster with the Helm chart; how can I set the default timezone for the cluster?
There is no such override right now. You may be able to achieve this by modifying the Helm charts to add a TZ environment variable to the pod:
https://www.ibm.com/support/pages/how-do-you-change-timezone-pods
See also: kubernetes timezone in POD with command and argument
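As a rough sketch, the container spec in the chart's StatefulSet template would gain an entry along these lines (the container name is illustrative, and the exact location in the YugabyteDB chart templates may differ):

    containers:
      - name: yb-tserver              # illustrative container name
        env:
          - name: TZ                  # standard timezone variable honored by glibc-based images
            value: "America/New_York"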
Related
I have a Spark application and want to deploy it on a Kubernetes cluster.
Following the documentation below, I have managed to create an empty Kubernetes cluster, generate a Docker image using the Dockerfile provided under kubernetes/dockerfiles/spark/Dockerfile, and deploy it on the cluster using spark-submit in a dev environment.
https://spark.apache.org/docs/latest/running-on-kubernetes.html
However, in a 'proper' environment we have a managed Kubernetes cluster (bespoke, unlike EKS etc.) and will have to provide pod configuration files to get anything deployed.
I believe you can supply a pod template file as an argument to the spark-submit command.
https://spark.apache.org/docs/latest/running-on-kubernetes.html#pod-template
How can I do this without spark-submit? And are there any example yaml files?
PS: we have limited access to this cluster, e.g. we can install Helm charts but not operators or controllers.
You could try the Kubernetes Spark operator CRD (https://github.com/GoogleCloudPlatform/spark-on-k8s-operator) and provide pod configuration through it.
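A minimal sketch of a SparkApplication manifest for that operator is shown below; the image, class, and jar path are placeholders, and the full spec is documented in the operator's repo:

    apiVersion: sparkoperator.k8s.io/v1beta2
    kind: SparkApplication
    metadata:
      name: spark-pi
      namespace: default
    spec:
      type: Scala
      mode: cluster
      image: my-registry/spark:3.3.0       # placeholder: image built from the Spark Dockerfile
      mainClass: org.apache.spark.examples.SparkPi
      mainApplicationFile: local:///opt/spark/examples/jars/spark-examples.jar
      sparkVersion: "3.3.0"
      driver:
        cores: 1
        memory: 512m
        serviceAccount: spark              # service account allowed to create executor pods
      executor:
        instances: 2
        cores: 1
        memory: 512m

Note that the operator itself has to be installed in the cluster for this resource to do anything, which may conflict with the "no operators or controllers" constraint mentioned in the question.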
I turned on Unity Catalog for our workspace. Now a job cluster has an access mode setting (see the docs), and I can manually change this setting in the UI.
But how do I control this setting when creating the job through databricks jobs create --json-file X.json?
You need to specify data_security_mode with the value "NONE" in the cluster definition (for some reason it is missing from the API docs, but you can find the details in the Terraform provider docs). Really, though, it should be the default value, so you may not need to specify it explicitly.
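As a sketch, the payload passed to databricks jobs create --json-file could look roughly like this (spark_version, node_type_id, and the notebook task are placeholders for whatever you already use; with the newer tasks-based job format the same key goes inside each task's or job cluster's new_cluster block). Per the Terraform provider docs, other accepted values include SINGLE_USER and USER_ISOLATION:

    {
      "name": "my-job",
      "new_cluster": {
        "spark_version": "11.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 1,
        "data_security_mode": "NONE"
      },
      "notebook_task": {
        "notebook_path": "/path/to/notebook"
      }
    }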
I am trying to deploy the Consul client on a k8s cluster (the Consul server is on a Docker Swarm cluster). I want to use a config.yaml (mentioned in https://www.consul.io/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes) to set up the configuration.
I found a Helm Chart Configuration page (https://www.consul.io/docs/k8s/helm#client) and an agent Configuration page (https://www.consul.io/docs/agent/options#config_key_reference). What is the difference between them? It seems that I should refer to the Helm Chart Configuration page since I am working on k8s. However, on that page I cannot find how to set up things like node_name, data_dir, client_addr, bind_addr, and advertise_addr. Besides, I also need to set verify_incoming, encrypt, verify_outgoing, and verify_server_hostname. For ca_file, cert_file, and key_file, I assume that cert_file maps to caCert (in the Helm chart configuration) and key_file to caKey, and I am not sure what ca_file stands for.
k8s version:
Server Version: v1.22.4
There are four servers in the cluster.
Any help would be appreciated.
Thanks
I can not find how to set up something like node_name, data_dir, client_addr, bind_addr, advertise_addr. Besides, I also need to set verify_incoming, encrypt, verify_outgoing, verify_server_hostname. For ca_file, cert_file, and key_file, I assume that cert_file is for caCert
Most of these parameters are specified by the Helm chart when deploying the Consul client or server pods. Is there a particular reason that you need to override these particular settings?
With that said, you can use server.extraConfig and client.extraConfig to provide additional configuration parameters to the server and client agents. Normally you would only specify parameters that are not already specified by deployments created by the Helm chart.
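For illustration, a values file along these lines covers most of the items listed above (sketched against the official hashicorp/consul chart; key names can change between chart versions, so check the Helm reference you linked):

    global:
      enabled: false                    # run only client agents in Kubernetes; servers stay outside
      tls:
        enabled: true                   # the chart wires up ca_file/cert_file/key_file and verify_* for you
      gossipEncryption:                 # corresponds to the agent's `encrypt` setting
        secretName: consul-gossip-key   # illustrative secret holding the gossip key
        secretKey: key
    client:
      enabled: true
      extraConfig: |
        {
          "log_level": "INFO"
        }

Settings such as node_name, data_dir, client_addr, bind_addr, and advertise_addr are normally filled in by the chart's templates from pod metadata, which is why they do not show up as top-level Helm values.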
I used the Terraform module terraform-aws-modules/rds/aws (version 2.20.0) to provision a MariaDB master and a replica. I would like to promote the replica to be a standalone DB instance. The document at https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html gives instructions for doing it via the AWS console, but I would like to do it with Terraform. Does anyone have an idea of how to promote a replica to a standalone DB instance using Terraform? My terraform version is v01.3.5.
I am guessing you have the read replica resource managed via Terraform.
From the docs: "Removing the replicate_source_db attribute from an existing RDS Replicate database managed by Terraform will promote the database to a fully standalone database."
You can put a condition there (for example driven by a variable) to switch promotion on and off.
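A rough sketch, assuming the replica is managed as a plain aws_db_instance resource (resource and variable names are illustrative, the master instance is assumed to be defined elsewhere, and the rds module you are using may not expose such a toggle directly):

    variable "promote_replica" {
      description = "Set to true to promote the read replica to a standalone instance"
      type        = bool
      default     = false
    }

    resource "aws_db_instance" "replica" {
      identifier     = "mariadb-replica"
      instance_class = "db.t3.medium"

      # When promote_replica is true this evaluates to null, which Terraform treats
      # the same as removing replicate_source_db, so the next apply promotes the replica.
      replicate_source_db = var.promote_replica ? null : aws_db_instance.master.identifier
    }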
I have an AKS (Azure Kubernetes Service) cluster configured, up and running, with Kubernetes installed.
I am deploying containers using kubectl proxy and the Kubernetes dashboard GUI it provides.
I am trying to increase the log level of the pods in order to get more information for better debugging.
I read a lot about kubectl config set and the log level --v=0 [0-10], but I am not able to change the log level, and the documentation does not seem to make this clear.
Can someone point me in the right direction?
The --v flag is an argument to kubectl and specifies the verbosity of the kubectl output. It has nothing to do with the log levels of the application running inside your Pods.
To get the logs from your Pods, you can run kubectl logs <pod>, or read /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/ on the Kubernetes node.
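For example (pod and container names are placeholders):

    kubectl logs my-pod                      # logs from the pod's single container
    kubectl logs my-pod -c my-container -f   # follow the logs of a specific container
    kubectl get pods -v=6                    # -v only raises kubectl's own client-side verbosity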
To increase the log level of your application, the application itself has to support it. And as @Jose Armesto said above, this is usually configured using an environment variable.
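For instance, if the application reads its log level from an environment variable (the variable name is application-specific; LOG_LEVEL below is only an example), the container spec in your Deployment would get something like:

    containers:
      - name: my-app                    # illustrative container name
        image: my-registry/my-app:1.0   # placeholder image
        env:
          - name: LOG_LEVEL             # whatever variable your application actually honors
            value: "debug"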