I am trying to copy some files from my local Terraform directory into my Datadog resources, into a preexisting configuration path.
When I try the below in my datadog-values.yaml, I do not see any of my configuration files copied into that location. I also cannot see any logs, even in debug mode, that tell me whether it failed or whether the path was incorrect.
See datadog helm-charts
# agents.volumes -- Specify additional volumes to mount in the dd-agent container
volumes:
  - hostPath:
      path: ./configs
    name: openmetrics_config

# agents.volumeMounts -- Specify additional volumes to mount in all containers of the agent pod
volumeMounts:
  - name: openmetrics_config
    mountPath: /etc/datadog-agent/conf.d/openmetrics.d
    readOnly: true
What I've tried
I can manually copy the configuration files into the directory with a shell script like the one below. But of course, if the Datadog pod names change on a restart, I have to update it manually.
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl rollout restart deployment datadog-cluster-agent -n datadog
The volumes you use here don't work the way you want. That ./configs directory is not your local directory: a hostPath refers to a path on the Kubernetes node, and Kubernetes has no idea about your local machine.
But fear not.
There are a few ways of doing this, and it all depends on your needs. They are:
Terraformed config
Terraformed mount
Terraformed copy config action
Terraformed config
To have a config file terraformed means:
to have the config updated in k8s whenever the file changes - we want Terraform to track those changes
to have the config uploaded before the service using it starts (this is a configuration file after all; it configures something, I assume)
DISCLAIMER - the service won't restart after a config change (that's achievable, but it's another topic)
To achieve this, create a ConfigMap for every config:
resource "kubernetes_config_map" "config" {
metadata {
name = "some_name"
namespace = "some_namespace"
}
data = {
"config.conf" = file(var.path_to_config)
}
}
and then use it in your volumeMounts. I assume you're working with the Helm provider; its set {} blocks only take plain string values, so the easiest way to pass a structure like this is through values with yamlencode (note the matching agents.volumes entry pointing back at the ConfigMap):
values = [
  yamlencode({
    agents = {
      volumes = [{
        name      = kubernetes_config_map.config.metadata.0.name
        configMap = { name = kubernetes_config_map.config.metadata.0.name }
      }]
      volumeMounts = [{
        name      = kubernetes_config_map.config.metadata.0.name
        mountPath = "/where/to/mount"
      }]
    }
  })
]
In the example above I used a single config and a single volume for simplicity, but for multiple configs a for_each should be enough, as sketched below.
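A minimal sketch of that for_each variant, assuming a hypothetical local map of check names to the config files from the question (the names and the datadog namespace are illustrative):

locals {
  openmetrics_configs = {
    bookie    = "./configs/bookie_conf.yaml"
    broker    = "./configs/broker_conf.yaml"
    proxy     = "./configs/proxy_conf.yaml"
    zookeeper = "./configs/zookeeper_conf.yaml"
  }
}

resource "kubernetes_config_map" "openmetrics" {
  for_each = local.openmetrics_configs

  metadata {
    # Kubernetes object names must be DNS-compatible, so no underscores here
    name      = "openmetrics-${each.key}"
    namespace = "datadog"
  }

  data = {
    "conf.yaml" = file(each.value)
  }
}

Each ConfigMap can then be referenced from the agents.volumes / agents.volumeMounts lists in the same yamlencode block, one entry per each.key.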
Terraformed mount
Another variant is when you don't want Terraform to track the configurations. Then what you want to do is:
Create a single storage (it can be mounted storage from your kube provider, or a dynamically created volume in Terraform - choose your poison)
Mount this storage as a Kubernetes volume (kubernetes_persistent_volume_v1 in Terraform)
Reference it in the chart values like in the previous section (a sketch follows this list).
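A minimal sketch of that variant, using a hostPath-backed PV purely as an illustration (the names, capacity, and storage backend are assumptions - replace the source with whatever your provider actually gives you):

resource "kubernetes_persistent_volume_v1" "config_store" {
  metadata {
    name = "datadog-config-store"
  }

  spec {
    capacity = {
      storage = "1Gi"
    }
    access_modes = ["ReadWriteOnce"]

    persistent_volume_source {
      # illustration only: a path on the node; swap this for your real volume source
      host_path {
        path = "/mnt/datadog-configs"
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim_v1" "config_store" {
  metadata {
    name      = "datadog-config-store"
    namespace = "datadog"
  }

  spec {
    access_modes = ["ReadWriteOnce"]
    volume_name  = kubernetes_persistent_volume_v1.config_store.metadata.0.name

    resources {
      requests = {
        storage = "1Gi"
      }
    }
  }
}

The claim is what the agents.volumes entry would then point at (a persistentVolumeClaim volume instead of a configMap one).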
Terraformed copy config action
The last one, and my least favorite option, is to call a copy action from Terraform. It's a last resort... Provisioners.
Even the Terraform docs say it's bad, yet it has one advantage: it's super easy to use. You can simply call your shell command here - it could be scp, rsync, or even (but please don't do it) kubectl cp.
So as not to encourage this solution any further, I'll just leave the doc of null_resource here, which uses provisioner "remote-exec" (you can use "local-exec"), with a minimal sketch below.
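For completeness, a minimal local-exec sketch (the trigger, namespace, and the hard-coded pod name are assumptions lifted from the question; it simply shells out to kubectl cp, with all the drawbacks mentioned above):

resource "null_resource" "copy_openmetrics_config" {
  # re-run the copy whenever the local file changes
  triggers = {
    config_hash = filemd5("./configs/bookie_conf.yaml")
  }

  provisioner "local-exec" {
    command = "kubectl -n datadog cp ./configs/bookie_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d"
  }
}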
Related
I'm trying to copy a pod's repository directly to an Azure storage account using a pipe.
Instead of running these two commands:
kubectl cp my_pod:my_repository/ . -n my_namespace
azcopy cp my_repository/ "https://my-storage.blob.core.windows.net/?sp=r..." --recursive=true
I would like to do something like this, using the "--from-to" azcopy parameter:
kubectl cp my_pod:my_repository/ -n my_namespace | azcopy cp "https://my-storage.blob.core.windows.net/?sp=r..." --from-to PipeBlob --recursive=true
Not sure if it's possible - maybe with xargs? I hope I'm clear enough.
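One possible sketch, untested: kubectl cp itself cannot write to a pipe, so this streams a tar archive out of the pod with kubectl exec and hands it to azcopy's pipe mode (the container/blob path in the SAS URL is a placeholder, and --recursive does not apply when piping):

kubectl exec my_pod -n my_namespace -- tar cf - my_repository \
  | azcopy cp "https://my-storage.blob.core.windows.net/my-container/my_repository.tar?sp=r..." --from-to PipeBlob

The result lands as a single .tar blob rather than individual files, so it trades the intermediate local copy for an archive on the storage account.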
I am writing a .gitlab-ci.yml file for my application, and in it I have to run a kubectl command to get all pods. After running that command I have the names of the pods I need, but now I have to run kubectl cp to copy a file into all three pods, and I don't know how to do that.
If anyone knows the way to do this activity then please reply.
Thanks
job:
  script:
    - |
      for pod in $(kubectl get po -n your-namespace -o jsonpath='{.items[*].metadata.name}')
      do
        kubectl cp $(pwd)/some-file.txt your-namespace/$pod:/
      done
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config get-contexts
    - kubectl config use-context path/to/agent/repository:agent-name
    - kubectl get pods
Take this scenario:
I want to delete every running pod automatically from the command line, without having to type kubectl delete pod <pod_name> -n <namespace> for each pod.
You can use awk to filter pod names based on their STATUS == Running. The code below deletes all pods in the Running state from the $NAMESPACE namespace, using this filter:
kubectl get pod -n $NAMESPACE | awk '$3=="Running"{print $1}'
Example:
for pod in $(kubectl get pod -n $NAMESPACE | awk '$3=="Running"{print $1}'); do
  kubectl delete pod -n $NAMESPACE "$pod"
done
OR
You may use jsonpath,
NAMESPACE=mynamespace
for pod in $(kubectl get pod -n $NAMESPACE -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}{"\n"}'); do
  kubectl delete pod -n $NAMESPACE "$pod"
done
NOTE: The code above will delete all Running pods in the namespace given by the $NAMESPACE variable.
Example:
kubectl get pod -n mynamespace
NAME        READY   STATUS      RESTARTS   AGE
foo-mh6j7   0/1     Completed   0          5d3h
nginx       1/1     Running     2          7d10h
mongo       2/2     Running     12         57d
busybox     1/1     Running     187        61d
jsonpath query to print all pods in Running state:
kubectl get pod -n mynamespace -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}{"\n"}'
nginx mongo busybox
Although you have not asked about the ready state, the following query can be used to list each pod together with the ready status of its containers:
kubectl get pod -n mynamespace -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].ready}{"\n"}{end}'
foo-mh6j7   false
nginx       true
mongo       true true
busybox     true
Similarly, this can be done via grep:
kubectl get pod -n $NAMESPACE |grep -P '\s+([1-9]+)\/\1\s+'
NOTE: Neither of these approaches will prevent pods from being respawned if they are created via a ReplicaSet, Deployment, StatefulSet, etc. In that case they will be deleted and then recreated.
You could filter and delete running pods by:
kubectl delete pods -n <NAMESPACE> --field-selector=status.phase=Running
Here is a shell script I made to achieve the task:
i=0 && for pod in $(kubectl get pods | grep 'Running')
do
  if [ `expr $i % 5` == 0 ]
  then kubectl delete pod $pod
  fi
  i=`expr $i + 1`
done
I found that when we loop over kubectl get pods | grep 'Running', every 5th word is a pod name.
So I basically wrote the script to take every 5th word from the loop and execute whatever command I want on it.
Still, this feels like a naive approach. Feel free to share a better one.
To directly answer the question from the topic summary, for people who found this question but want to delete every pod, not only the running ones:
Simply do:
kubectl delete pods -n <NAMESPACE> --all
I have a problem with my Terraform plan when using Cloud Build: I cannot run the gsutil command from a Terraform module. I get this error:
Error: Error running command 'gsutil -m rsync -d -r ../../../sources/composer gs://toto/dags/': exit status 127. Output: /bin/sh: gsutil: not found
My cloudbuild.yaml:
steps:
  - id: 'branch name'
    name: 'alpine'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        echo "***********************"
        echo "$BRANCH_NAME"
        echo "***********************"
  ...
  # [START tf-apply]
  - id: 'tf apply'
    name: 'hashicorp/terraform:0.15.0'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        if [ -d "terraform/environments/$BRANCH_NAME/" ]; then
          cd terraform/environments/$BRANCH_NAME
          terraform apply -auto-approve
        else
          echo "***************************** SKIPPING APPLYING *******************************"
          echo "Branch '$BRANCH_NAME' does not represent an official environment."
          echo "*******************************************************************************"
        fi
  # [END tf-apply]
timeout: 3600s
My module to put files in GCS:
resource "null_resource" "upload_folder_content" {
provisioner "local-exec" {
command = "gsutil -m rsync -d -r ${var.dag_folder_path} ${var.composer_dag_gcs}/"
}
}
As you are using Hashicorp's Terraform image in that step, it is expected that gsutil is not included by default, and as such you're unable to run the command your null_resource defines, as opposed to what you can do in your local environment.
In order to overcome that, you could build your own custom image and push it to Google Container Registry so you can use it afterwards. With that option you also have more flexibility, as you can install whatever dependencies your Terraform code has.
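As a rough sketch, such an image could start from Google's cloud-sdk image (which already ships gsutil) and add the Terraform binary on top - the base tag, Terraform version, and registry path below are assumptions:

FROM gcr.io/google.com/cloudsdktool/cloud-sdk:slim
# add the Terraform CLI next to gcloud/gsutil
RUN apt-get update && apt-get install -y --no-install-recommends curl unzip \
 && curl -sSLo /tmp/terraform.zip https://releases.hashicorp.com/terraform/0.15.0/terraform_0.15.0_linux_amd64.zip \
 && unzip /tmp/terraform.zip -d /usr/local/bin \
 && rm -rf /tmp/terraform.zip /var/lib/apt/lists/*

Build and push it (for example to gcr.io/$PROJECT_ID/terraform-gsutil), then use that as the name: of the 'tf apply' step instead of hashicorp/terraform:0.15.0.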
If you look at the actual error line, at the very end, it says this was the output of the command:
/bin/sh: gsutil: not found
I suspect that gsutil is simply not being found on your shell's path.
Perhaps you need to install whatever package gsutil is found in?
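If you'd rather keep the stock hashicorp/terraform image, one possible (untested) variation is to install the standalone gsutil from PyPI at the start of the 'tf apply' step - the Alpine package names are assumptions, and some of gsutil's dependencies may need extra build packages:

# prepend this to the 'tf apply' step's script, before terraform runs
apk add --no-cache python3 py3-pip
pip3 install --no-cache-dir gsutil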
I don't have access to the namespace openebs and maya-apiserver. Can I run mayactl on my nodes to get the same information? If yes, how does mayactl know which PVCs/PVs I have access to? How does it protect other volumes from accidental deletion via mayactl volume delete?
You can do it from the maya-apiserver pod. Exec into it with the command below from the master node:
kubectl exec -it <pod name> -n openebs -- bash
Once you are inside the pod, you can run the required mayactl command.
Otherwise, you can run the command directly, in the following format:
kubectl exec -it <pod name> -n openebs -- <required mayactl command>
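For example (the pod name is illustrative; volume list and volume describe are the typical read-only mayactl commands):

# find the maya-apiserver pod, then run mayactl inside it
kubectl get pods -n openebs | grep maya-apiserver
kubectl exec -it maya-apiserver-7f4b9c6d8-x2k4p -n openebs -- mayactl volume list
kubectl exec -it maya-apiserver-7f4b9c6d8-x2k4p -n openebs -- mayactl volume describe --volname <pv-name>

Note that running mayactl this way still requires RBAC permission to exec into pods in the openebs namespace.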