I've installed GitLab on my k8s cluster using a Helm chart.
Now I would like to create a local backup. As far as I can tell from the docs, there is the toolbox pod, in which I could just run backup-utility. But that only works for backups stored in S3, and I just need a local backup file in a specific directory.
So I came up with gitlab-rake gitlab:backup:create, also run in the toolbox pod. It works, as I can find the file in /srv/gitlab/tmp/backups.
However, I couldn't find an option for where the backups should be moved to.
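For now the backup can be copied out of the toolbox pod by hand, roughly like this (pod name and namespace are placeholders):
# Copy the backup tarballs from the toolbox pod to the local machine
kubectl cp <namespace>/<toolbox-pod>:/srv/gitlab/tmp/backups ./gitlab-backups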
In an Omnibus installation there are config options like
gitlab_rails['backup_path'] = '/mnt/backups'
in /etc/gitlab/gitlab.rb, followed by running sudo gitlab-ctl reconfigure.
How should I configure this in a Helm chart installation? These are my current toolbox values:
global:
  edition: ce
gitlab:
  toolbox:
    enabled: true
    replicas: 1
    backups:
      cron:
        enabled: true
        persistence:
          enabled: true
          storageClass: longhorn
          accessMode: ReadWriteOnce
          size: 10Gi
    persistence:
      enabled: true
      storageClass: longhorn
      accessMode: 'ReadWriteOnce'
      size: '10Gi'
Related
I have two Grafana Loki installations, both done with Helm from the official repository.
Both are configured exactly the same (except for DNS).
The only difference is that one runs on Azure and one on our own ESXi.
The problem I have is the log file parsing. The installation on Azure always seems to parse the log files with the - cri: {} settings and not with - docker: {}.
A quick look inside the promtail pods shows me the - docker: {} setting in promtail.yaml. But I always get this output:
2023-01-16 10:39:15
2023-01-16T09:39:15.604384089Z stdout F {"level":50,"time":1673861955603,"service
On our own Esxi I have the correct:
2023-01-13 16:58:18
{"level":50,"time":1673625498068,"service"
From what I read, the stdout F prefix means - cri: {} parsing, which is Promtail's default.
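For reference, the two stages differ only in which on-disk log format they expect; a minimal sketch of the relevant snippet (everything else in the values stays unchanged):
promtail:
  config:
    snippets:
      pipelineStages:
        - cri: {}      # parses "<timestamp> stdout F <message>" lines written by containerd/CRI runtimes
        # - docker: {} # parses Docker's JSON log format instead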
Any idea why this happens? My installation YAML is:
#helm upgrade --install loki --namespace=monitoring grafana/loki-stack -f value_mus.yaml
grafana:
  enabled: true
  admin:
    existingSecret: grafana-admin-credentials
  sidecar:
    datasources:
      enabled: true
      maxLines: 1000
  image:
    tag: latest
  persistence:
    enabled: true
    size: 10Gi
    storageClassName: managed-premium
    accessModes:
      - ReadWriteOnce
  grafana.ini:
    users:
      default_theme: light
    server:
      domain: xxx
    smtp:
      enabled: true
      from_address: xxx
      from_name: Grafana Notification
      host: xxx
      user: xxx
      password: xxx
      skip_verify: false
      startTLS_policy:
promtail:
  enabled: true
  config:
    snippets:
      pipelineStages:
        - docker: {}
Any help will be welcome.
We are planning to run our Azure DevOps build agents in Kubernetes pods, but going through the internet we couldn't find any recommended approach to follow.
Details:
Azure DevOps Server
AKS 1.19.11
Looking for:
An AKS Kubernetes cluster where ADO can trigger its pipelines with the required dependencies.
Pod scaling should happen based on the load initiated by ADO.
Is there any default MS-provided image currently available for the build agents?
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Any suggestions are highly appreciated.
This article provides instructions for running your Azure Pipelines agent in Docker. You can set up a self-hosted agent in Azure Pipelines to run inside a Windows Server Core container (for Windows hosts) or an Ubuntu container (for Linux hosts) with Docker.
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Add tools and customize the container
Once you have created a basic build agent, you can extend the Dockerfile to include additional tools and their dependencies, or build your own container by using this one as a base layer. Just make sure that the following are left untouched:
The start.sh script is called by the Dockerfile.
The start.sh script is the last command in the Dockerfile.
Ensure that derivative containers don't remove any of the dependencies stated by the Dockerfile.
Note: Docker was replaced with containerd in Kubernetes 1.19, and Docker-in-Docker became unavailable. A few use cases for running Docker inside a Docker container:
One potential use case for Docker in Docker is the CI pipeline, where you need to build and push Docker images to a container registry after a successful code build.
Building Docker images with a VM is pretty straightforward. However, when you plan to use Jenkins Docker-based dynamic agents for your CI/CD pipelines, Docker in Docker becomes a must-have functionality.
Sandboxed environments.
For experimental purposes on your local development workstation.
If your use case requires running Docker inside a container, then you must use Kubernetes version <= 1.18.x (currently not supported on Azure) as shown here, or run the agent in an alternative Docker environment as shown here.
Otherwise, if you are deploying the self-hosted agent on AKS, the azdevops-deployment Deployment at step 4, here, must be changed to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: azdevops-agent
        image: <acr-server>/dockeragent:latest
        env:
        - name: AZP_URL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_URL
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_TOKEN
        - name: AZP_POOL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_POOL
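The manifest above reads its settings from a Secret named azdevops. A minimal sketch of that Secret (all values are placeholders to fill in):
apiVersion: v1
kind: Secret
metadata:
  name: azdevops
type: Opaque
stringData:
  AZP_URL: "<your-azure-devops-server-url>"  # placeholder
  AZP_TOKEN: "<personal-access-token>"       # placeholder
  AZP_POOL: "<agent-pool-name>"              # placeholder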
Pod scaling should happen based on the load initiated by ADO.
You can use the cluster autoscaler and the horizontal pod autoscaler. When combined, the horizontal pod autoscaler focuses on running the number of pods required to meet application demand, while the cluster autoscaler focuses on running the number of nodes required to support the scheduled pods. [Reference]
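As a rough sketch only (the metric and thresholds are assumptions, not a recommendation), a horizontal pod autoscaler for the Deployment above could look like:
apiVersion: autoscaling/v2beta2  # autoscaling/v2 on newer clusters
kind: HorizontalPodAutoscaler
metadata:
  name: azdevops-agent
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: azdevops-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70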
I have nginx-ingress controllers in AKS clusters that were created using a Helm chart. Those AKS clusters are reporting critical because beta.kubernetes.io/os is configured with the value linuxcode. I tried to fix this using the helm upgrade command, but I don't know the Helm chart location.
Is there any way I can update it without reinstalling the nginx-ingress controller?
This is what I get from the chart values by running helm get values nginx-ingress:
USER-SUPPLIED VALUES:
controller:
  nodeSelector:
    beta.kubernetes.io/os: linux
  replicaCount: 2
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    loadBalancerIP: 11.1.0.220
defaultBackend:
  nodeSelector:
    beta.kubernetes.io/os: linuxcode
When I try this helm upgrade command:
helm upgrade nginx-ingress nginx-ingress-1.41.3 -n dev -f values.yml
it fails with the error message:
Error: failed to download "nginx-ingress-1.41.3" (hint: running `helm repo update` may help)
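Presumably the chart has to be pulled from a chart repository rather than referenced by a local name; a rough sketch of what that would look like (repo name and URL are placeholders for wherever the chart was originally installed from):
helm repo add <repo-name> <repo-url>
helm repo update
helm upgrade nginx-ingress <repo-name>/nginx-ingress --version 1.41.3 -n dev -f values.yml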
Does Minikube on Windows support a persistent volume with a hostPath? If so, what is the syntax?
I tried:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kbmongo002
  labels:
    type: local
spec:
  storageClassName: mongostorageclass
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/temp/mongo"
  persistentVolumeReclaimPolicy: Retain
---
This passed validation and created the PV, and a PVC claimed it, but nothing was written to my expected location of C:\temp\mongo.
I also tried:
hostPath:
  path: "c:/temp/mongo"
persistentVolumeReclaimPolicy: Retain
---
That resulted in:
Error: Error response from daemon: Invalid bind mount spec
"c:/temp/mongo:/data/db": invalid mode: /data/db
Error syncing pod
If you use VirtualBox on Windows, only C:\Users is mapped into the VM (as /c/Users), which is what Kubernetes can access. That is how VirtualBox works.
Minikube uses a VM to simulate the Kubernetes node.
Minikube provides a mount feature as well, but it is not so user-friendly for persistence.
You can try one of the solutions below:
use folders under /c/Users in your YAML file (see the sketch below)
map extra folders into the VirtualBox VM, like C:\Users
use minikube mount, see host folder mount
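Taking the first option, the PV from the question would point at a folder under C:\Users; a sketch with an illustrative path:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kbmongo002
  labels:
    type: local
spec:
  storageClassName: mongostorageclass
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/c/Users/<your-user>/mongo"  # seen as C:\Users\<your-user>\mongo on the Windows host
  persistentVolumeReclaimPolicy: Retain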
I have tried k8s hostPath on Windows, and it works well.
You should use a drive letter in the pod mount path; see this example: https://github.com/andyzhangx/Demo/blob/master/windows/azuredisk/aspnet-pod-azuredisk.yaml#L14
As there is a Docker mount-path-related bug on Windows, you need to use a drive letter as the mount path in the pod; see this issue: https://github.com/moby/moby/issues/34729
I need to be able to use the batch/v2alpha1 and apps/v1alpha1 APIs on k8s. Currently, I'm running a cluster with 1.5.0-beta.1 installed. I would prefer to do this in the deployment script, but all I can find are the fields
"apiVersionDefault": "2016-03-30",
"apiVersionStorage": "2015-06-15",
And nowhere can I find anything about which dates to use to update those. There are also some instructions in the Kubernetes docs which explain how to use the --runtime-config flag on the kube-apiserver, so, following those, I SSH'd into the master, found the kube-apiserver manifest file, and edited it to look like this:
apiVersion: "v1"
kind: "Pod"
metadata:
name: "kube-apiserver"
namespace: "kube-system"
labels:
tier: control-plane
component: kube-apiserver
spec:
hostNetwork: true
containers:
- name: "kube-apiserver"
image: "gcr.io/google_containers/hyperkube-amd64:v1.5.0-beta.1"
command:
- "/hyperkube"
- "apiserver"
- "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
- "--address=0.0.0.0"
- "--allow-privileged"
- "--insecure-port=8080"
- "--secure-port=443"
- "--cloud-provider=azure"
- "--cloud-config=/etc/kubernetes/azure.json"
- "--service-cluster-ip-range=10.0.0.0/16"
- "--etcd-servers=http://127.0.0.1:4001"
- "--tls-cert-file=/etc/kubernetes/certs/apiserver.crt"
- "--tls-private-key-file=/etc/kubernetes/certs/apiserver.key"
- "--client-ca-file=/etc/kubernetes/certs/ca.crt"
- "--service-account-key-file=/etc/kubernetes/certs/apiserver.key"
- "--v=4"
- "--runtime-config=batch/v2alpha1,apps/v1alpha1"
volumeMounts:
- name: "etc-kubernetes"
mountPath: "/etc/kubernetes"
- name: "var-lib-kubelet"
mountPath: "/var/lib/kubelet"
volumes:
- name: "etc-kubernetes"
hostPath:
path: "/etc/kubernetes"
- name: "var-lib-kubelet"
hostPath:
path: "/var/lib/kubelet"
That pretty much nuked my cluster, so I'm at a complete loss now. I'm going to have to rebuild the cluster, so I'd prefer to add this in the deployment template, but really any help would be appreciated.
ACS-Engine clusters let you override almost any option you desire; see this document for the cluster definitions. I don't think a post-deployment script exists because, apart from Kubernetes version upgrades, there are no common options you would want to change in the API server and other k8s components after a deployment. For that purpose there are scripts included in ACS-Engine, and other options exist for various cloud providers and flavors of Kubernetes (e.g. Tectonic has a mechanism for auto-upgrades).
To override the values after the deployment of an ACS-Engine-deployed K8s cluster, you can manually update the manifests here:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
And also update the values in the kubelet here (i.e. to update the version of kubernetes): /etc/default/kubelet
Of course, you'll want to kubectl drain your nodes before making these changes, reboot the node, and, once the node comes back online and is running properly, kubectl uncordon the node.
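For each node, that boils down to something like this (node name is a placeholder):
kubectl drain <node-name> --ignore-daemonsets
# edit the manifests under /etc/kubernetes/manifests/, reboot the node, then:
kubectl uncordon <node-name>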
It's hard to say why your cluster was nuked without more information. In general, if you are making lots of changes to API versions and configurations, you probably want a new cluster.