Skaffold setValues is missing Helm values - skaffold

Skaffold's setValues is dropping some of my Helm values.
When I instead save the same values in a values.yml file and reference it with valuesFiles, there is no problem and the rendering succeeds.
I suspect setValues doesn't handle nested arrays. Please review the example below.
Why is ingress.tls[0].hosts missing?
skaffold.yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: false
  tagPolicy:
    sha256: {}
  artifacts:
    - image: example
      jib: {}
      sync:
        auto: false
deploy:
  helm:
    releases:
      - name: example
        chartPath: backend-app
        artifactOverrides:
          image: example
        imageStrategy:
          helm: {}
        setValues:
          ingress:
            enabled: true
            className: nginx
            hosts:
              - host: example.minikube
                paths:
                  - path: /
                    pathType: ImplementationSpecific
            tls:
              - secretName: example-tls
                hosts:
                  - example.minikube
skaffold run
skaffold run -v TRACE
# Output
[...]
[...]
[...]
DEBU[0106] Running command: [
helm --kube-context minikube install example backend-app --set-string image.repository=example,image.tag=6ad72230060e482fef963b295c0422e8d2f967183aeaca0229838daa7a1308c3 --set ingress.className=nginx --set --set ingress.enabled=true --set ingress.hosts[0].host=example.minikube --set ingress.hosts[0].paths[0].path=/ --set ingress.hosts[0].paths[0].pathType=ImplementationSpecific --set ingress.tls[0].secretName=example-tls] subtask=0 task=Deploy
[...]
[...]
[...]
Ingress Manifest
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
  tls:
    - hosts:
      secretName: example-tls
  rules:
    - host: "example.minikube"
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: example
                port:
                  number: 80

This was fixed recently via the PR here:
https://github.com/GoogleContainerTools/skaffold/pull/8152
The fix is currently on Skaffold's main branch and will be available in the v2.1.0 Skaffold release (to be released 12/7/2022) and onward.
EDIT: the v2.1.0 release is delayed, with some of the maintainers on holiday. It is currently planned for late December or early January.
EDIT #2: v2.1.0 is out now (1/23/2023).
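For anyone stuck on an earlier release, the question itself points at the workaround: move the nested values out of setValues and into a values file referenced via valuesFiles. A minimal sketch of that approach (the file name skaffold-values.yml is an assumption):
deploy:
  helm:
    releases:
      - name: example
        chartPath: backend-app
        artifactOverrides:
          image: example
        imageStrategy:
          helm: {}
        # workaround: nested arrays come from a values file instead of setValues
        valuesFiles:
          - skaffold-values.yml
where skaffold-values.yml contains the ingress block exactly as it appears under setValues above.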

Related

How to change, edit, save /etc/hosts file from Azure Bash for Kubernetes?

I uploaded a project to Kubernetes, and for its gateway to redirect to the services it requires the following entries:
127.0.0.1 app.my.project
127.0.0.1 event-manager.my.project
127.0.0.1 logger.my.project
and so on.
I can't run any sudo commands, so sudo nano /etc/hosts doesn't work. I tried vi /etc/hosts and it gives a permission-denied error. How can I edit the /etc/hosts file, or do some configuration on Azure, to make it work like that?
Edit:
To give more information: I have uploaded a project to Kubernetes that has reverse-proxy settings, so the project's web app is not reachable via IP. Instead, when I run the application locally, I have to edit the hosts file of the computer I'm using with
127.0.0.1 app.my.project
127.0.0.1 event-manager.my.project
127.0.0.1 logger.my.project
and so on. So whenever I type web-app.my.project, the gateway redirects to the web-app part; if I write app.my.project, it redirects to the app part, etc.
When I uploaded it to Azure Kubernetes Service, it added default-http-backend in the ingress-nginx namespace, which it created by itself. To expose these services, I enabled the HTTP Routing option in Azure, which gave me the load balancer at the left side of the image. So if I'm reading the situation correctly (I'm most probably wrong, though), it is something like the image below:
So I added hostAliases in the kube-system, ingress-nginx and default namespaces to make it behave as if I had edited the hosts file when running the project locally. But it still gives me the default backend - 404 ingress error.
Edit 2:
I have an nginx-ingress-controller, which allows the redirection as far as I understand. So when I add hostAliases to it as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      hostAliases:
        - ip: "127.0.0.1"
          hostnames:
            - "app.ota.local"
            - "gateway.ota.local"
            - "treehub.ota.local"
            - "tuf-reposerver.ota.local"
            - "web-events.ota.local"
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: {{ .ingress_controller_docker_image }}
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: tcp
              containerPort: 8000
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
So when I edit the YAML file as described above, it gives the following error on Azure:
Failed to update the deployment
Failed to update the deployment 'nginx-ingress-controller'. Error: BadRequest (400) : Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.HostAliases: []v1.HostAlias: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|liases":["ip "127.0|..., bigger context ...|theus.io/scrape":"true"}},"spec":{"hostAliases":["ip "127.0.0.1""],"hostnames":["app.ota.local","g|...
If I edit the YAML file locally and apply it with a local kubectl that is connected to Azure, it gives the following error:
serviceaccount/weave-net configured
clusterrole.rbac.authorization.k8s.io/weave-net configured
clusterrolebinding.rbac.authorization.k8s.io/weave-net configured
role.rbac.authorization.k8s.io/weave-net configured
rolebinding.rbac.authorization.k8s.io/weave-net configured
daemonset.apps/weave-net configured Using cluster from kubectl
context: k8s_14
namespace/ingress-nginx unchanged deployment.apps/default-http-backend
unchanged service/default-http-backend unchanged
configmap/nginx-configuration unchanged configmap/tcp-services
unchanged configmap/udp-services unchanged
serviceaccount/nginx-ingress-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole
unchanged role.rbac.authorization.k8s.io/nginx-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding
unchanged
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding
unchanged error: error validating
"/home/.../ota-community-edition/scripts/../generated/templates/ingress":
error validating data: ValidationError(Deployment.spec.template.spec):
unknown field "hostnames" in io.k8s.api.core.v1.PodSpec; if you choose
to ignore these errors, turn validation off with --validate=false
make: *** [Makefile:34: start_start-all] Error 1
Adding entries to a Pod's /etc/hosts file provides a Pod-level override of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec. Modification without using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart.
I suggest that you use hostAliases instead:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "app.my.project"
        - "event-manager.my.project"
        - "logger.my.project"
  containers:
    - name: cat-hosts
      image: busybox
      command:
        - cat
      args:
        - "/etc/hosts"
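For the Deployment from the question, the same stanza belongs under the pod template's spec, with hostnames nested inside the - ip: list item; the "unknown field hostnames in PodSpec" error typically means hostnames ended up one level too high. A trimmed sketch (selector, containers and other fields omitted for brevity):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  template:
    spec:
      hostAliases:
        - ip: "127.0.0.1"      # each list item is a mapping with ip + hostnames
          hostnames:           # nested under the same item, not directly under the PodSpec
            - "app.ota.local"
            - "gateway.ota.local"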

Unable to access local certificate on kubernetes cluster

I have a Node.js application running in a container that works well when I run it locally on Docker.
When I try to run it in my k8s cluster, I get the following error.
kubectl -n some-namespace logs --follow my-container-5d7dfbf876-86kv7
> code@1.0.0 my-container /src
> node src/app.js
Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1486:34)
at TLSSocket.emit (events.js:315:20)
at TLSSocket._finishInit (_tls_wrap.js:921:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:695:12) {
code: 'UNABLE_TO_GET_ISSUER_CERT_LOCALLY'
}
This is strange, as the only difference is that I run the container with
command: ["npm", "run", "consumer"]
I have also tried adding the following to my Dockerfile:
npm config set strict-ssl false
as per the recommendation here: npm install error - unable to get local issuer certificate, but it doesn't seem to help.
So it should be trying to authenticate this way.
I would appreciate any pointers on this.
Here is a copy of my .yaml file for completeness.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: label
  name: label
  namespace: some-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      name: label
  template:
    metadata:
      labels:
        name: label
    spec:
      containers:
        - name: label
          image: some-registry:latest
          resources:
            limits:
              memory: 7000Mi
              cpu: '3'
          ports:
            - containerPort: 80
          command: ["npm", "run", "application"]
          env:
            - name: "DATABASE_URL"
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: DBUri
            - name: "DEBUG"
              value: "*,-babel,-mongo:*,mongo:queries,-http-proxy-agent,-https-proxy-agent,-proxy-agent,-superagent,-superagent-proxy,-sinek*,-kafka*"
            - name: "ENV"
              value: "production"
            - name: "NODE_ENV"
              value: "production"
            - name: "SERVICE"
              value: "consumer"
          volumeMounts:
            - name: certs
              mountPath: /etc/secrets
              readOnly: true
      volumes:
        - name: certs
          secret:
            secretName: certs
            items:
              - key: certificate
                path: certificate
              - key: key
                path: key
It looks like the Pod is not mounting the secrets in the right place. Make sure that .spec.containers[].volumeMounts[].mountPath is pointing to the right path for the container image.
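If the mount path is correct but Node.js still can't build the certificate chain, one common approach (an assumption here, not something confirmed in the thread) is to point Node at the mounted CA material with NODE_EXTRA_CA_CERTS, reusing the existing certs volume:
          env:
            # assumes the secret's "certificate" key contains the CA chain the app must trust
            - name: NODE_EXTRA_CA_CERTS
              value: /etc/secrets/certificate
          volumeMounts:
            - name: certs
              mountPath: /etc/secrets
              readOnly: true
Node reads NODE_EXTRA_CA_CERTS once at process start, so the pod needs to be restarted after adding it.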

GKE not able to pull image from artifactory

I am using GKE, GitLab and JFrog to achieve CI and CD. All my steps work, but my deployment to GKE fails as it's not able to pull my image, even though the image does exist. I have given the deployment YAML, the GitLab job and the error message below.
Below is my deployment file; I have hardcoded the image to make clear that the image does exist.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: go
  name: hello-world-go
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  selector:
    matchLabels:
      app: go
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 33%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: go
    spec:
      containers:
        -
          # image: "<IMAGE_NAME>"
          image: cicd-docker-local.jfrog.io/stage_proj:56914646
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 2
            periodSeconds: 2
          name: go
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 2
            periodSeconds: 2
      imagePullSecrets:
        - name: my.secret
Below is my GitLab YAML file snippet:
deploy to dev:
  stage: Dev-Deploy
  image: dtzar/helm-kubectl
  script:
    - kubectl config set-cluster mycluster --server="$KUBE_URL" --insecure-skip-tls-verify=true
    - kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"
    - kubectl config set-context default --cluster=mycluster --user=admin
    - kubectl config use-context default; sleep 10
    - kubectl delete secret my.secret
    - kubectl create secret docker-registry my.secret --docker-server=$ARTIFACTORY_DOCKER_REPOSITORY --docker-username=$ARTIFACTORY_USER --docker-password=$ARTIFACTORY_PASS --docker-email="abc@gmail.com"
    - echo ${STAGE_CONTAINER_IMAGE}
    - kubectl apply -f deployment.yaml
    - kubectl rollout status -w "deployment/hello-world-go"
    # - kubectl rollout status -f deployment.yaml
    - kubectl get all,ing -l app='hello-world-go'
  only:
    - master
I get an error like the one below in GKE.
Cannot pull image 'cicd-docker-local.jfrog.io/stage_proj:56914646' from the registry.
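A quick way to narrow this down (general kubectl diagnostics, not from the original thread) is to confirm that the pull secret actually targets cicd-docker-local.jfrog.io and to read the exact pull error from the pod events:
# the decoded .dockerconfigjson should list cicd-docker-local.jfrog.io as the auth host
kubectl get secret my.secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
# events on the pods show the registry's actual response (401, 403, not found, ...)
kubectl describe pods -l app=go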

Kubernetes: Cassandra (StatefulSet) deployment on GCP

Has anyone tried deploying Cassandra (as a POC) on GCP using Kubernetes (not GKE)? If so, can you please share info on how to get it working?
You could start by looking at IBM's Scalable-Cassandra-deployment-on-Kubernetes.
For seed discovery you can use a headless Service, similar to this Multi-node Cassandra Cluster Made Easy with Kubernetes.
Some difficulties:
fast local storage for K8s is still in beta; of course, you can use what K8s already has. Some users report running Ceph RBD on K8s with 8 C* nodes, each holding 2 TB of data.
at some point you will realize that you need a C* operator; good starting points are Instaclustr's Cassandra Operator and Pantheon Systems' Cassandra Operator.
you need a way to scale stateful applications in gracefully (this should also be covered by the operator; there is a solution if you don't want an operator, but you still need to use a controller).
You could also check the Cassandra mailing list, since there are people there already using Cassandra over K8s in production.
I have implemented Cassandra on Kubernetes. Please find my Service and StatefulSet YAML files:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
    - port: 9042
  selector:
    app: cassandra
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
        - name: cassandra
          image: gcr.io/google-samples/cassandra:v12
          imagePullPolicy: Always
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          resources:
            limits:
              cpu: "500m"
              memory: 1Gi
            requests:
              cpu: "500m"
              memory: 1Gi
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - nodetool drain
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "K8Demo"
            - name: CASSANDRA_DC
              value: "DC1-K8Demo"
            - name: CASSANDRA_RACK
              value: "Rack1-K8Demo"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - /ready-probe.sh
            initialDelaySeconds: 15
            timeoutSeconds: 5
          volumeMounts:
            - name: cassandra-data
              mountPath: /cassandra_data
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "fast"
        resources:
          requests:
            storage: 5Gi
Hope this helps.
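A quick way to exercise the manifests above (standard kubectl and nodetool usage, not part of the original answer; it assumes the two documents are saved together as cassandra.yaml and that a StorageClass named "fast" exists for the volumeClaimTemplates):
kubectl apply -f cassandra.yaml
kubectl get statefulset cassandra                 # wait for READY 3/3
kubectl exec -it cassandra-0 -- nodetool status   # each node should report UN (Up/Normal)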
Use Helm.
On Mac:
brew install helm@2
brew link --force helm@2
helm init
To avoid Kubernetes Helm permission hell (from https://github.com/helm/helm/issues/2224):
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Cassandra
incubator:
helm repo add incubator https://charts.helm.sh/incubator
helm install --namespace "cassandra" -n "cassandra" incubator/cassandra
helm status "cassandra"
helm delete --purge "cassandra"
bitnami:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install --namespace "cassandra" -n "my-deployment" bitnami/cassandra
helm status "my-deployment"
helm delete --purge "my-deployment"

Configuring Azure log analytics

I am following this documentation https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-monitor to configure a monitoring solution on AKS with the following YAML file:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: omsagent
spec:
  template:
    metadata:
      labels:
        app: omsagent
        agentVersion: 1.4.0-12
        dockerProviderVersion: 10.0.0-25
    spec:
      containers:
        - name: omsagent
          image: "microsoft/oms"
          imagePullPolicy: Always
          env:
            - name: WSID
              value: <WSID>
            - name: KEY
              value: <KEY>
          securityContext:
            privileged: true
          ports:
            - containerPort: 25225
              protocol: TCP
            - containerPort: 25224
              protocol: UDP
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-sock
            - mountPath: /var/opt/microsoft/omsagent/state/containerhostname
              name: container-hostname
            - mountPath: /var/log
              name: host-log
            - mountPath: /var/lib/docker/containers/
              name: container-log
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - ps -ef | grep omsagent | grep -v "grep"
            initialDelaySeconds: 60
            periodSeconds: 60
      nodeSelector:
        beta.kubernetes.io/os: linux
      # Tolerate a NoSchedule taint on master that ACS Engine sets.
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Equal"
          value: "true"
          effect: "NoSchedule"
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
        - name: container-hostname
          hostPath:
            path: /etc/hostname
        - name: host-log
          hostPath:
            path: /var/log
        - name: container-log
          hostPath:
            path: /var/lib/docker/containers/
This fails with an error
error: error converting YAML to JSON: yaml: line 65: mapping values are not allowed in this context
I've verified that the file is syntactically correct using a YAML validator, so I'm not sure what's wrong.
This is Kubernetes version 1.7; it also happens with version 1.9.
That yaml file works for me:
[root#jasoncli#jasonye aksoms]# vi oms-daemonset.yaml
[root#jasoncli#jasonye aksoms]# kubectl create -f oms-daemonset.yaml
daemonset "omsagent" created
[root#jasoncli#jasonye aksoms]# kubectl get daemonset
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
omsagent 1 1 0 1 0 beta.kubernetes.io/os=linux 1m
Please check your kubectl client version with the command kubectl version; here is my output:
[root#jasoncli#jasonye aksoms]# kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.5", GitCommit:"cce11c6a185279d037023e02ac5249e14daa22bf", GitTreeState:"clean", BuildDate:"2017-12-07T16:16:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.7", GitCommit:"8e1552342355496b62754e61ad5f802a0f3f1fa7", GitTreeState:"clean", BuildDate:"2017-09-28T23:56:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
You can run the command az aks install-cli to install the kubectl client locally.
For more information about installing the Kubernetes command-line client, please refer to this article.
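If upgrading the client doesn't help, the manifest can also be validated without touching the cluster; a small sketch (the --dry-run form depends on the kubectl version: plain --dry-run on older clients such as 1.7/1.9, --dry-run=client on current ones):
kubectl version
# client-side parse/validation only; surfaces the offending line without creating anything
kubectl apply -f oms-daemonset.yaml --validate=true --dry-run=client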
