Puppet Bolt multilevel inventory YAMLs

From the Puppet Bolt documentation on inventory.yaml, it seems you can define multiple levels in the YAML file by specifying another group inside the definition of a group, thus creating a multilevel or nested inventory file.
However, I can't find any examples of how to target the nested groups with the bolt command from the CLI.
For instance, this YAML from the documentation:
groups:
  - name: ssh_nodes
    groups:
      - name: webservers
        targets:
          - 192.168.100.179
          - 192.168.100.180
          - 192.168.100.181
      - name: memcached
        targets:
          - 192.168.101.50
          - 192.168.101.60
        config:
          ssh:
            user: root
    config:
      transport: ssh
      ssh:
        user: centos
        private-key: ~/.ssh/id_rsa
        host-key-check: false
How do I target the nested webservers group inside the ssh_nodes group?
Normally I use something like this to call a top-level group, which in this case is the ssh_nodes group:
bolt plan run "deploy::update_package" \
--targets "ssh_nodes" \
--user "${BOLT_USER}" \
--private-key "${KEY}" \
--modulepath "path/to/module" \
--inventoryfile "${INVENTORY_FILE}" \
package_name="${PACKAGE}" \
package_version="${VERSION}"
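In case it helps, my understanding from the Bolt inventory docs is that group names must be unique across the whole inventory, so a nested group such as webservers can be passed to --targets by name exactly like a top-level group. A sketch reusing the command above:
bolt plan run "deploy::update_package" \
  --targets "webservers" \
  --user "${BOLT_USER}" \
  --private-key "${KEY}" \
  --modulepath "path/to/module" \
  --inventoryfile "${INVENTORY_FILE}" \
  package_name="${PACKAGE}" \
  package_version="${VERSION}"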

Related

Run docker inside docker container in AKS

We have been tasked with setting up a container-based Jenkins deployment, and there is strong pressure to do this in AKS. Our Jenkins needs to be able to build other containers. Normally I'd handle this with a docker-in-docker approach by mounting /var/run/docker.sock & /usr/bin/docker into my running container.
I do not know if this is possible in AKS or not. Some forum posts on GitHub suggest that host-mounting is possible but broken in the latest AKS release. My limited experimentation with a Helm chart was met with this error:
Error: release jenkins4 failed: Deployment.apps "jenkins" is invalid:
[spec.template.spec.initContainers[0].volumeMounts[0].name: Required
value, spec.template.spec.initContainers[0].volumeMounts[0].name: Not
found: ""]
The change I made was to update the volumeMounts: section of jenkins-master-deployment.yaml and include the following:
- type: HostPath
  hostPath: /var/run/docker.sock
  mountPath: /var/run/docker.sock
Is what I'm trying to do even possible based on AKS security settings, or did I just mess up my chart?
If it's not possible to mount the docker socket into a container in AKS, that's fine, I just need a definitive answer.
Thanks,
Well, we did this a while back for VSTS (cloud TFS, now called Azure DevOps) build agents, so it should be possible. The way we did it was also by mounting the docker.sock.
The relevant part for us was:
... container spec ...
  volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-volume
volumes:
  - name: docker-volume
    hostPath:
      path: /var/run/docker.sock
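To make that fragment self-contained, here is a rough sketch of a complete pod using the same hostPath approach; the pod name and image are illustrative placeholders (any image that ships a docker CLI would do):
apiVersion: v1
kind: Pod
metadata:
  name: docker-sock-example      # illustrative name
spec:
  containers:
    - name: builder
      image: docker:cli          # illustrative; any image with a docker CLI
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
  volumes:
    - name: docker-volume
      hostPath:
        path: /var/run/docker.sock
Note that anything started through that socket runs as a sibling container on the node's Docker daemon, not inside the pod itself.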
I have achieved the requirement using the following manifests.
Our k8s manifest file carries this securityContext under the pod definition:
securityContext:
  privileged: true
In our Dockerfile we install Docker-inside-Docker like this:
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install curl wget -y
RUN apt-get install \
        ca-certificates \
        curl \
        gnupg \
        lsb-release -y
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
# last two lines of Dockerfile
COPY ./agent_startup.sh .
RUN chmod +x /agent_startup.sh
CMD ["/usr/sbin/init"]
# note: only the last CMD in a Dockerfile takes effect, so this one overrides the line above
CMD ["./agent_startup.sh"]
Content of the agent_startup.sh file:
#!/bin/bash
echo "DOCKER STARTS HERE"
service --status-all
service docker start
service docker start
docker version
docker ps
echo "DOCKER ENDS HERE"
sleep 100000
Sample k8s file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: build-agent
  labels:
    app: build-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: build-agent
  template:
    metadata:
      labels:
        app: build-agent
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: build-agent
          image: myecr-repo.azurecr.io/buildagent
          securityContext:
            privileged: true
When the Dockerized agent pool was up, the Docker daemon was running inside the docker container.
My kubectl version:
PS D:\Temp\temp> kubectl.exe version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.22.6
WARNING: version difference between client (1.25) and server (1.22) exceeds the supported minor version skew of +/-1
pod shell output:
root@**********-bcd967987-52wrv:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Disclaimer: Our Kubernetes cluster version is 1.22, the base image is Ubuntu 18.04, and this was tested only to check that Docker-inside-Docker runs; the agent was not registered with Azure DevOps. You can modify the startup script according to your needs.

How to set up one GitLab agent for all projects in a GitLab group to deploy projects separately to the Kubernetes cluster

I have applied GitLab agents separately to my Kubernetes cluster for each project inside the GitLab group, using a helm command and a separate namespace for each project. As an example:
There are 2 projects inside my GitLab group.
1. mygroup/project1
2. mygroup/project2
And I used helm commands like this.
For project 1:
helm upgrade --install gitlab-runner gitlab/gitlab-agent --namespace gitlab-agent-project-1 --create-namespace --set image.tag=v15.1.0 --set config.token=XXXXXXXX --set config.kasAddress=wss://kas.gitlab.com
For project 2:
helm upgrade --install gitlab-runner gitlab/gitlab-agent --namespace gitlab-agent-project-2 --create-namespace --set image.tag=v15.1.0 --set config.token=XXXXXXXX --set config.kasAddress=wss://kas.gitlab.com
The only difference between these two is the namespace.
So I am asking: is this the best and correct way of doing this? Can't we use one GitLab agent for all projects inside the GitLab group, and use it separately for the CI/CD Kubernetes deployments of each project?
Because a pod is spun up for every separate agent I apply: if I had 100 projects, I would end up with 100 agent pods and 100 pod IP addresses to deal with.
Yes, you can use one GitLab agent for all projects inside a GitLab group. Currently, I'm implementing this.
Inside the GitLab agent project, you define a .gitlab/agents/{agent-name}/config.yaml file.
Inside the config.yaml file, you grant ci_access to the projects inside your GitLab group:
gitops:
  # Manifest projects are watched by the agent. Whenever a project changes,
  # GitLab deploys the changes using the agent.
  manifest_projects:
    - id: medai/vinlab/vinlab-testing/test-k8s-cicd/test-gitlab-agent
      default_namespace: gitlab-agent
ci_access:
  projects:
    - id: medai/vinlab/vinlab-testing/test-k8s-cicd/sample-go-service
    - id: medai/vinlab/vinlab-testing/test-k8s-cicd/api-test
From a project that needs access to the GitLab agent, you run kubectl config use-context to switch to the agent's context, and then you can perform whatever actions you need. For example, this code is from the .gitlab-ci.yml file of one project:
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config get-contexts
    - kubectl config use-context medai/vinlab/vinlab-testing/test-k8s-cicd/test-gitlab-agent:dev-agent-1
    - kubectl apply -f functional-tester.yaml --namespace vinlab-testing
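For completeness, with this setup the agent itself only needs to be installed once for the whole group rather than once per project; a sketch using the same chart and flags as the question (token redacted):
helm upgrade --install gitlab-agent gitlab/gitlab-agent --namespace gitlab-agent --create-namespace --set image.tag=v15.1.0 --set config.token=XXXXXXXX --set config.kasAddress=wss://kas.gitlab.com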

Accessing Kubernetes worker node labels from the Containers/pods

How to access Kubernetes worker node labels from the container/pod running in the cluster?
Labels are set on the worker node, as the YAML output of this kubectl command launched against an Azure AKS worker node shows:
$ kubectl get nodes aks-agentpool-39829229-vmss000000 -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2021-10-15T16:09:20Z"
  labels:
    agentpool: agentpool
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: Standard_DS2_v2
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: eastus
    failure-domain.beta.kubernetes.io/zone: eastus-1
    kubernetes.azure.com/agentpool: agentpool
    kubernetes.azure.com/cluster: xxxx
    kubernetes.azure.com/mode: system
    kubernetes.azure.com/node-image-version: AKSUbuntu-1804gen2containerd-2021.10.02
    kubernetes.azure.com/os-sku: Ubuntu
    kubernetes.azure.com/role: agent
    kubernetes.azure.com/storageprofile: managed
    kubernetes.azure.com/storagetier: Premium_LRS
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: aks-agentpool-39829229-vmss000000
    kubernetes.io/os: linux
    kubernetes.io/role: agent
    node-role.kubernetes.io/agent: ""
    node.kubernetes.io/instance-type: Standard_DS2_v2
    storageprofile: managed
    storagetier: Premium_LRS
    topology.kubernetes.io/region: eastus
    topology.kubernetes.io/zone: eastus-1
  name: aks-agentpool-39829229-vmss000000
  resourceVersion: "233717"
  selfLink: /api/v1/nodes/aks-agentpool-39829229-vmss000000
  uid: 0241eb22-4d1b-4d65-870f-fcc51dac1c70
Note: The pod/Container that I have is running with non-root access and it doesn't have a privileged user.
Is there a way to access these labels from the worker node itself?
In the AKS cluster,
Create a namespace like:
kubectl create ns get-labels
Create a Service Account in the namespace like:
kubectl create sa get-labels -n get-labels
Create a Clusterrole like:
kubectl create clusterrole get-labels-clusterrole --resource=nodes --verb=get,list
Create a ClusterRoleBinding (nodes are cluster-scoped, so a namespaced RoleBinding would not grant access to them) like:
kubectl create clusterrolebinding get-labels-clusterrolebinding --clusterrole get-labels-clusterrole --serviceaccount get-labels:get-labels
Run a pod in the namespace you created, like:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: get-labels
  namespace: get-labels
spec:
  serviceAccountName: get-labels
  containers:
    - image: centos:7
      name: get-labels
      command:
        - /bin/bash
        - -c
        - tail -f /dev/null
EOF
Execute a shell in the running container like:
kubectl exec -it get-labels -n get-labels -- bash
Install jq tool in the container:
yum install epel-release -y && yum update -y && yum install jq -y
Set up shell variables:
# API Server Address
APISERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
If you want to get a list of all nodes and their corresponding labels, then use the following command:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/nodes | jq '.items[].metadata | {name,labels}'
Otherwise, if you want the labels corresponding to a particular node, then use:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/nodes/<nodename> | jq '.metadata.labels'
Please replace <nodename> with the name of the intended node.
N.B. You can choose to include the installation of the jq tool in the Dockerfile from which your container image is built, and to use environment variables for the shell variables. We have done neither in this answer in order to show how the method works.
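If you prefer to manage these access objects declaratively rather than with kubectl create, a sketch of roughly equivalent manifests (reusing the same names as the commands above) could look like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: get-labels
  namespace: get-labels
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: get-labels-clusterrole
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: get-labels-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: get-labels-clusterrole
subjects:
  - kind: ServiceAccount
    name: get-labels
    namespace: get-labels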

How to apply changes to a Linux user's group assignments inside a local Ansible playbook?

I'm trying to install Docker and create a Docker image within a local Ansible playbook containing multiple plays, adding the user to the docker group in between:
- hosts: localhost
  connection: local
  become: yes
  gather_facts: no
  tasks:
    - name: install docker
      ansible.builtin.apt:
        update_cache: yes
        pkg:
          - docker.io
          - python3-docker
    - name: Add current user to docker group
      ansible.builtin.user:
        name: "{{ lookup('env', 'USER') }}"
        append: yes
        groups: docker
    - name: Ensure that docker service is running
      ansible.builtin.service:
        name: docker
        state: started

- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Create docker container
      community.docker.docker_container:
        image: ...
        name: ...
When executing this playbook with ansible-playbook, I get a permission denied error at the "Create docker container" task. Rebooting and running the playbook again resolves the error.
I have tried manually executing some of the commands suggested here and then executing the playbook again, which works, but I'd like to do everything from within the playbook.
Adding a task like
- name: allow user changes to take effect
  ansible.builtin.shell:
    cmd: exec sg docker newgrp `id -gn`
does not work.
How can I refresh the Linux user group assignments from within the playbook?
I'm on Ubuntu 18.04.
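Not a definitive answer, but one direction that may be worth experimenting with: once the user task has run, membership in the docker group is already recorded in /etc/group, so wrapping a docker command in sg (which re-checks /etc/group for the child process) can sometimes work without a new login session. A hypothetical task sketch, not taken from the question:
- name: Run a docker command under the newly assigned group (hypothetical workaround)
  ansible.builtin.command:
    cmd: sg docker -c "docker ps"
This does not help modules such as community.docker.docker_container, which talk to the daemon from the Ansible process itself; for those, running the second play with become: yes (as the first play already does) side-steps the group membership question entirely.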

How to install custom plugin for Grafana running in Kubernetes cluster on Azure

I have configured a Kubernetes cluster on Microsoft Azure and installed a Grafana helm chart on it.
In a directory on my local computer, I have a custom Grafana plugin that I developed in the past and I would like to install it in Grafana running on the Cloud.
Is there a way to do that?
You can use an initContainer like this:
initContainers:
  - name: local-plugins-downloader
    image: busybox
    command:
      - /bin/sh
      - -c
      - |
        #!/bin/sh
        set -euo pipefail
        mkdir -p /var/lib/grafana/plugins
        cd /var/lib/grafana/plugins
        for url in http://192.168.95.169/grafana-piechart-panel.zip; do
          wget --no-check-certificate $url -O temp.zip
          unzip temp.zip
          rm temp.zip
        done
    volumeMounts:
      - name: storage
        mountPath: /var/lib/grafana
You need to have an emptyDir volume called storage in the pod; this is the default if you use the Helm chart.
It then needs to be mounted on the Grafana container as well. You also need to make sure that the Grafana plugin directory is /var/lib/grafana/plugins.
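For reference, the parts described in the last two sentences would look roughly like this at the pod-spec level (the Helm chart renders most of this for you; the snippet only shows where the volume and the mount on the Grafana container live):
containers:
  - name: grafana
    image: grafana/grafana        # managed by the chart; shown here only for context
    volumeMounts:
      - name: storage
        mountPath: /var/lib/grafana
volumes:
  - name: storage
    emptyDir: {}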
