Defining command & argument - Pod yaml - linux

As per the syntax mentioned, below is the Pod yaml using args:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    resources:
      limits:
        memory: "64Mi" # 64 MiB
        cpu: "50m"     # 50 millicpu (0.05 CPU, or 5% of one core)
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Documentation says: If you define args, but do not define a command, the default command is used with your new arguments.
As per the documentation, what is the default command for the arguments in the YAML above?

"Default command" references the command set in your container image. In case of your image - k8s.gcr.io/busybox - this appears to be /bin/sh:
$ docker pull k8s.gcr.io/busybox
Using default tag: latest
latest: Pulling from busybox
a3ed95caeb02: Pull complete
138cfc514ce4: Pull complete
Digest: sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67
Status: Downloaded newer image for k8s.gcr.io/busybox:latest
k8s.gcr.io/busybox:latest
$ docker image inspect k8s.gcr.io/busybox | jq '.[0] | .Config.Cmd'
[
"/bin/sh"
]
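In Kubernetes terms, command maps onto the image's ENTRYPOINT and args onto its CMD, so the image's ENTRYPOINT is worth checking the same way (for a bare busybox image this is typically null, in which case the args above become the full command line):
$ docker image inspect k8s.gcr.io/busybox | jq '.[0].Config.Entrypoint'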
So, by explicitly setting a pod.spec.containers.command, you are effectively overriding that value.
See also:
$ kubectl explain pod.spec.containers.command
KIND: Pod
VERSION: v1
FIELD: command <[]string>
DESCRIPTION:
Entrypoint array. Not executed within a shell. The docker image's
ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME)
are expanded using the container's environment. If a variable cannot be
resolved, the reference in the input string will be unchanged. The
$(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME).
Escaped references will never be expanded, regardless of whether the
variable exists or not. Cannot be updated. More info:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

To visualize this, you can run the following commands:
kubectl run busybox-default --image busybox
pod/busybox-default created
kubectl run busybox-command --image busybox --command -- sleep 10000
pod/busybox-command created
Check the output of docker ps and look at the COMMAND column. You may use the --no-trunc flag for the complete output.
Output for the container running without command option:
docker ps |grep -E 'busybox-default'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e01428c98071 k8s.gcr.io/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_busybox-default_default_3449d2bc-a731-4441-9d78-648a7fa730dd_0
Output for the container running with command option:
docker ps |grep -E 'busybox-command'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
557578fc60ea busybox "sleep 10000" 5 minutes ago Up 5 minutes k8s_busybox-comand_busybox-command_default_57f6d09c-2ed1-4b73-b3f9-c2b612c19a16_0
7c6f1240ab07 k8s.gcr.io/pause:3.2 "/pause" 5 minutes ago Up 5 minutes k8s_POD_busybox-comand_default_57f6d09c-2ed1-4b73-b3f9-c2b612c19a16_0
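Tying this back to the original pod: once the liveness probe starts failing (about 30 seconds in, when /tmp/healthy is removed), the kubelet restarts the container. Assuming the YAML above was applied as-is, you can watch that happen with:
kubectl get pod liveness-exec -w
kubectl describe pod liveness-exec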

Related

How to check my MariaDB Docker image is started in an Azure pipeline?

In an Azure pipeline I pull and start a Docker image of a MariaDB:
- bash: |
    docker pull <some_azure_repository>/databasedump:8878
    echo "docker image pulled"
    docker run -d --publish 3306:3306 <some_azure_repository>/databasedump:8878
I would like to make sure that the Docker image has successfully started before continuing with the next steps. That is why I add this step:
- powershell: |
    if (!(mysql -h localhost -P 3306 -u root -p $(SMDB_ROOT_PASSWORD) -e 'use smdb_all')) {
      echo "Will have some sleep"  # should be "Start-Sleep -Seconds 15"
    }
But when this is executed in an Azure pipeline, the pipeline gets stuck on this line:
if (!(mysql -h localhost -P 3306 -u root -p $(SMDB_ROOT_PASSWORD) -e 'use smdb_all'))
The line
echo "Will have some sleep"
is never reached. Even if I change the condition from negative to positive:
if ((mysql -h localhost -P 3306 -u root -p $(SMDB_ROOT_PASSWORD) -e 'use smdb_all'))
the result is the same!
So, several questions:
1. How to correctly check whether the MariaDB Docker container is running?
2. Why does the execution get stuck on that line?
3. How to do this with a while loop (if the check is not successful, wait 15 seconds, then try another check, and so on)?
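Worth noting: with the mysql client, -p must not be separated from the password (-pSECRET or --password=SECRET); a bare -p makes the client prompt for the password interactively, which is a plausible reason a non-interactive pipeline hangs on exactly that line. For the retry loop in question 3, a bash sketch (assuming SMDB_ROOT_PASSWORD is available as an environment variable):
for i in $(seq 1 20); do
  # -p"$SMDB_ROOT_PASSWORD" attaches the password, so no interactive prompt
  if mysql -h 127.0.0.1 -P 3306 -u root -p"$SMDB_ROOT_PASSWORD" -e 'use smdb_all' 2>/dev/null; then
    echo "MariaDB is up"
    break
  fi
  echo "Waiting for MariaDB (attempt $i)..."
  sleep 15
done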

How to write loops in gitlab-ci.yaml file?

I am writing a gitlab-ci.yaml file for my application. In it I run a kubectl command to get all pods, which gives me the names of the pods I need, but now I have to run kubectl cp and copy a file into all three pods, and I don't know how to do that.
If anyone knows a way to do this, please reply.
Thanks
job:
  script:
    - |
      for pod in $(kubectl get po -n your-namespace -o jsonpath='{.items[*].metadata.name}')
      do
        kubectl cp $(pwd)/some-file.txt your-namespace/$pod:/
      done
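If the three pods share a label, you can scope the loop with a selector instead of copying into every pod in the namespace (app=myapp here is an assumption; substitute your own label):
for pod in $(kubectl get po -n your-namespace -l app=myapp -o jsonpath='{.items[*].metadata.name}')
do
  kubectl cp $(pwd)/some-file.txt your-namespace/$pod:/
done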
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config get-contexts
    - kubectl config use-context path/to/agent/repository:agent-name
    - kubectl get pods

Bash - Exiting script file not child bash command | Exit command [duplicate]

My question is simple: how do I execute a bash command in the pod? I want to do everything with one bash command.
[root@master ~]# kubectl exec -it --namespace="tools" mongo-pod --bash -c "mongo"
Error: unknown flag: --bash
So, the command is simply ignored.
[root@master ~]# kubectl exec -it --namespace="tools" mongo-pod bash -c "mongo"
root@mongo-deployment-78c87cb84-jkgxx:/#
Or so.
[root@master ~]# kubectl exec -it --namespace="tools" mongo-pod bash mongo
Defaulting container name to mongo.
Use 'kubectl describe pod/mongo-deployment-78c87cb84-jkgxx -n tools' to see all of the containers in this pod.
/usr/bin/mongo: /usr/bin/mongo: cannot execute binary file
command terminated with exit code 126
If it's just bash, it certainly works. But I want to jump into the mongo shell immediately.
I found a solution, but it does not work. Tell me if this is possible now?
Executing multiple commands (or from a shell script) in a kubernetes pod
Thanks.
The double dash symbol "--" is used to separate the command you want to run inside the container from the kubectl arguments.
So the correct way is:
kubectl exec -it --namespace=tools mongo-pod -- bash -c "mongo"
You forgot a space between "--" and "bash".
To execute multiple commands you may want:
- to create a script, mount it as a volume in your pod, and execute it
- to launch a side container with the script and run it
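For a short sequence, a single quoted command string also works (a sketch; the mongo --eval argument is only an illustration):
kubectl exec -it --namespace=tools mongo-pod -- bash -c "mongo --eval 'db.stats()' && echo done"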
I use something like this to get into the pod's shell:
kubectl exec -it --namespace develop pod-name -- bash
Then you can execute the command you want within the pod, e.g. ping:
ping www.google.com
Then you can see your ping log and voila... enjoy it :D

Deploy an empty pod that sits idle in a k8s namespace

Can anyone help me understand whether it's possible to deploy an empty pod on a node in k8s for basic network debugging? PS: I should be able to exec into this pod after it's deployed.
You can just use kubectl with a generator.
# Create an idle pod
$ kubectl run --generator=run-pod/v1 idle-pod -i --tty --image ubuntu -- bash
root@idle-pod:/# # Debug whatever you want inside the idle container
root@idle-pod:/# exit
$
# Exec into the idle pod
$ kubectl exec -i --tty idle-pod -- bash
root@idle-pod:/# # Debug whatever you want inside the idle container
root@idle-pod:/# exit
$
# Delete the idle pod
$ kubectl delete pod idle-pod
pod "idle-pod" deleted
$
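Newer kubectl releases deprecated and then removed the --generator flag, so on a recent cluster the first command becomes simply:
$ kubectl run idle-pod -i --tty --image ubuntu -- bash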
Just deploy a pod with the container you need and a command which does nothing.
Save this spec to a yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: empty
spec:
  containers:
  - name: empty
    image: alpine
    command: ["cat"]
    stdin: true  # keep stdin open; otherwise "cat" hits EOF immediately and the pod restarts
And then apply that yaml with kubectl apply -f $filename
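Once the pod is Running you can exec into it (the file name is hypothetical; note that alpine ships sh, not bash):
$ kubectl apply -f empty-pod.yaml
$ kubectl exec -it empty -- sh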

How can I use the internal ip address of a container as an environment variable in Docker

I'm trying to get the IP address of my docker container as an environment variable within the container. Here is what I've tried:
When starting the container
docker run -dPi -e ip=`hostname -i` myDockerImage
When the container is already booted up
docker exec -it myDockerImage bash -c "export ip=`hostname -i`"
The problem with these two methods is that they use the IP address of the host running the commands, not of the docker container: the backticks are expanded by the host's shell before docker ever sees the command.
So then I created a script inside the docker container that looks like this:
#!/bin/bash
export ip=`hostname -i`
echo $ip
And then run this with
docker exec -it myDockerImage bash -c ". ipVariableScript.sh"
When I add my_cmd (bash, in my case) to the end of the script, it works for that one bash session, but I can't use the variable later in the files I need it in. I need it set as an environment variable, not as a variable for one session.
So I already sourced it with the '.'. But it still won't echo when I'm in the container. If I put an echo $ip in the script, it prints the correct IP address, but it can only be used from within the script it's being set in.
Service names in Docker are more reliable and easier to use. However, here's how to assign the Docker guest IP to an environment variable inside the guest:
$ docker run -it ubuntu bash -c 'IP=$(hostname -i); echo ip=$IP'
ip=172.17.0.76
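If the variable has to be visible to later processes rather than one shell session, the usual pattern is to compute it in the image's entrypoint so every child process inherits it. A sketch (entrypoint.sh is a hypothetical script baked into the image and set as its ENTRYPOINT):
#!/bin/sh
# Compute this container's IP once at startup, export it,
# then hand control to the real command.
export ip=$(hostname -i)
exec "$@"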
So, this is an old question, but I ended up with the same question yesterday, and my solution is to use host.docker.internal.
My containers were working fine, but at some point the IP changed and I needed to update it in my docker-compose file. Of course I could run docker network inspect my-container_default and get the internal IP from that, but that also means editing my docker-compose file every time the IP changes (and I'm still not familiar enough with Docker to detect IP changes automatically or write a more sophisticated config). So I use the host.docker.internal hostname instead. Now I no longer need to look up my Docker IP, and everything stays connected.
Here is an example of a Node app which uses Elasticsearch and needs to connect to it:
version: '3.7'
services:
  api:
    ...configs...
    depends_on:
      - 'elasticsearch'
    volumes:
      - ./:/usr/local/api
    ports:
      - '3000:80'
    links:
      - elasticsearch:els
    environment:
      - PORT=80
      - ELASTIC_NODE=http://host.docker.internal:9200
  elasticsearch:
    container_name: 'els'
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    ...elastic search container configs...
    ports:
      - '9200:9200'
    expose:
      - 9200
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
Note the ELASTIC_NODE=http://host.docker.internal:9200 entry in the api service's environment, and the bridge network the Elasticsearch container is attached to.
This way you don't need to worry about knowing your IP.
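One caveat the answer doesn't mention: on Linux, host.docker.internal is not defined by default. Since Docker 20.10 you can map it to the host gateway yourself, e.g.:
docker run --add-host=host.docker.internal:host-gateway myDockerImage
or, in compose, an extra_hosts entry of "host.docker.internal:host-gateway" on the service.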
The container name is postgres in this example. It is a bit clumsy, but it delivers:
container_ip=$(docker inspect postgres -f "{{json .NetworkSettings.Networks }}" \
  | awk -v FS=: '{print $9}' \
  | cut -f1 -d\,)
container_ip="${container_ip//\"}"  # strip the surrounding quotes
Make a function out of it:
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -eu -o pipefail
#set -x
#trap read debug

# Assign the container's IP address to a variable.
function get_container_ip () {
  container_ip=$(docker inspect "$1" -f "{{json .NetworkSettings.Networks }}" \
    | awk -v FS=: '{print $9}' \
    | cut -f1 -d\,)
  container_ip=$(echo "${container_ip//\"}")
}

get_container_ip "$1"
echo "$container_ip"
