Skaffold won't deploy Kustomize resources - kustomize

I'm using Skaffold v2.0.1 to deploy an application. The apiVersion is skaffold/v3, and I deploy with the command below. The Helm resources are deployed, but the Kustomize resources are not.
skaffold run -p profile-one
The Skaffold config file looks like this:
apiVersion: skaffold/v3
kind: Config
metadata:
  name: nginx-app
manifests:
  helm:
    releases:
      - ...
    ...
deploy:
  helm: {}
portForward:
  - ...
  ...
profiles:
  - name: profile-one
    patches:
      ....
      ....
    manifests:
      kustomize:
        paths:
          - ./path/to/kustomize/files
    deploy:
      kubeContext: my-context
But it doesn't seem to deploy the actual k8s resources. Any idea?

Related

Skaffold setValues is getting missing helm values

Skaffold's setValues is dropping some of my Helm values.
When I put the same values in a values.yml file and use valuesFiles instead of setValues, there is no problem and the rendering succeeds.
I suspect setValues doesn't handle nested arrays. Please review the example below.
Why does ingress.tls[0].hosts not exist in the rendered manifest?
skaffold.yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: false
  tagPolicy:
    sha256: {}
  artifacts:
    - image: example
      jib: {}
      sync:
        auto: false
deploy:
  helm:
    releases:
      - name: example
        chartPath: backend-app
        artifactOverrides:
          image: example
        imageStrategy:
          helm: {}
        setValues:
          ingress:
            enabled: true
            className: nginx
            hosts:
              - host: example.minikube
                paths:
                  - path: /
                    pathType: ImplementationSpecific
            tls:
              - secretName: example-tls
                hosts:
                  - example.minikube
skaffold run
skaffold run -v TRACE
# Output
[...]
[...]
[...]
DEBU[0106] Running command: [
helm --kube-context minikube install example backend-app --set-string image.repository=example,image.tag=6ad72230060e482fef963b295c0422e8d2f967183aeaca0229838daa7a1308c3 --set ingress.className=nginx --set --set ingress.enabled=true --set ingress.hosts[0].host=example.minikube --set ingress.hosts[0].paths[0].path=/ --set ingress.hosts[0].paths[0].pathType=ImplementationSpecific --set ingress.tls[0].secretName=example-tls] subtask=0 task=Deploy
[...]
[...]
[...]
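For comparison, the flags one would expect Helm to receive for the tls hosts entry, using Helm's --set list-index syntax (this is a sketch of the missing piece, not actual Skaffold output; note also the stray empty --set before ingress.enabled=true in the trace above):

--set ingress.tls[0].secretName=example-tls --set ingress.tls[0].hosts[0]=example.minikube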
Ingress Manifest
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
  tls:
    - hosts:
      secretName: example-tls
  rules:
    - host: "example.minikube"
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: example
                port:
                  number: 80
This was fixed recently via the PR here:
https://github.com/GoogleContainerTools/skaffold/pull/8152
The fix is currently in Skaffold's main branch and will be available in the v2.1.0 Skaffold release (planned for 12/7/2022) and onward.
EDIT: the v2.1.0 release is delayed, with some of the maintainers on holiday. It is currently planned for late December or early January.
EDIT #2: v2.1.0 is out now (1/23/2023).
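Until you are on a release with that fix, a workaround consistent with the asker's own observation is to move the nested values into a values file and reference it with valuesFiles instead of setValues. A minimal sketch (the values-dev.yaml file name is illustrative):

# skaffold.yaml (deploy section only)
deploy:
  helm:
    releases:
      - name: example
        chartPath: backend-app
        artifactOverrides:
          image: example
        imageStrategy:
          helm: {}
        valuesFiles:
          - backend-app/values-dev.yaml

# backend-app/values-dev.yaml
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: example.minikube
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: example-tls
      hosts:
        - example.minikube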

New images are not being deployed to AKS

I've split the initial azure-pipelines.yml out into templates, iteration, etc. For whatever reason, the new images are not being deployed despite using the latest tag and/or imagePullPolicy: Always.
I basically have two pipelines, PR and Release:
PR is triggered when a pull request is submitted to merge into production. It runs unit tests, builds the Docker image, runs integration tests, etc., and then pushes the image to ACR if everything passes.
When the PR pipeline passes and the PR is approved, it is merged into production, which then triggers the Release pipeline.
Here is an example of one of my k8s deployment manifests (the pipeline says unchanged when these are applied):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-v2-deployment-prod
  namespace: prod
spec:
  replicas: 3
  selector:
    matchLabels:
      component: admin-v2
  template:
    metadata:
      labels:
        component: admin-v2
    spec:
      containers:
        - name: admin-v2
          imagePullPolicy: Always
          image: appacr.azurecr.io/app-admin-v2:latest
          ports:
            - containerPort: 4001
---
apiVersion: v1
kind: Service
metadata:
  name: admin-v2-cluster-ip-service-prod
  namespace: prod
spec:
  type: ClusterIP
  selector:
    component: admin-v2
  ports:
    - port: 4001
      targetPort: 4001
And here are the various pipeline-related YAML files I've been splitting out:
Both PR and Release:
# templates/variables.yaml
variables:
  dockerRegistryServiceConnection: '<GUID>'
  imageRepository: 'app'
  containerRegistry: 'appacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)'
  tag: '$(Build.BuildId)'
  imagePullSecret: 'appacr1c5a-auth'
  vmImageName: 'ubuntu-latest'
PR:
# pr.yaml
trigger: none
resources:
- repo: self
pool:
  vmImage: $(vmImageName)
variables:
- template: templates/variables.yaml
stages:
- template: templates/changed.yaml
- template: templates/unitTests.yaml
- template: templates/build.yaml
  parameters:
    services:
      - api
      - admin
      - admin-v2
      - client
- template: templates/integrationTests.yaml
# templates/build.yaml
parameters:
  - name: services
    type: object
    default: []
stages:
  - stage: Build
    displayName: Build stage
    jobs:
      - job: Build
        displayName: Build
        steps:
          - ${{ each service in parameters.services }}:
              - task: Docker@2
                displayName: Build and push an ${{ service }} image to container registry
                inputs:
                  command: buildAndPush
                  repository: $(imageRepository)-${{ service }}
                  dockerfile: $(dockerfilePath)/${{ service }}/Dockerfile
                  containerRegistry: $(dockerRegistryServiceConnection)
                  tags: |
                    $(tag)
Release:
# release.yaml
trigger:
  branches:
    include:
      - production
resources:
- repo: self
variables:
- template: templates/variables.yaml
stages:
- template: templates/publish.yaml
- template: templates/deploy.yaml
  parameters:
    services:
      - api
      - admin
      - admin-v2
      - client
# templates/deploy.yaml
parameters:
  - name: services
    type: object
    default: []
stages:
  - stage: Deploy
    displayName: Deploy stage
    dependsOn: Publish
    jobs:
      - deployment: Deploy
        displayName: Deploy
        pool:
          vmImage: $(vmImageName)
        environment: 'App Production AKS'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: KubernetesManifest@0
                  displayName: Create imagePullSecret
                  inputs:
                    action: createSecret
                    secretName: $(imagePullSecret)
                    kubernetesServiceConnection: 'App Production AKS'
                    dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
                - ${{ each service in parameters.services }}:
                    - task: KubernetesManifest@0
                      displayName: Deploy to ${{ service }} Kubernetes cluster
                      inputs:
                        action: deploy
                        kubernetesServiceConnection: 'App Production AKS'
                        manifests: |
                          $(Pipeline.Workspace)/k8s/aks/${{ service }}.yaml
                        imagePullSecrets: |
                          $(imagePullSecret)
                        containers: |
                          $(containerRegistry)/$(imageRepository)-${{ service }}:$(tag)
Both PR and Release pass...
The new images are in ACR...
I've pulled the images to verify they have the latest changes...
They just aren't getting deployed to AKS.
Any suggestions for what I am doing wrong here?
For whatever reason, the new images are not being deployed despite using latest tag
How should Kubernetes know that there is a new image? Kubernetes config is declarative, and Kubernetes is already running what once was the "latest" image.
Here is an example of one of my k8s deployment manifests (the pipeline says unchanged when these are applied)
Yeah, it is unchanged because the declared desired state has not changed. The Deployment manifest states what should be deployed; it is not a command.
Proposed solution
Whenever you build an image, always give it a unique name. And whenever you deploy something, always specify exactly which unique image should be running; then Kubernetes will manage the change in an elegant zero-downtime way using rolling deployments, unless you configure it to behave differently.
In your deployment you pull
image: appacr.azurecr.io/app-admin-v2:latest
Since there is no digest but simply the tag latest referenced, the Deployment effectively says:
"You want latest? I already have latest running!"
The important part is "running". imagePullPolicy: Always doesn't help if there is no need to pull in the first place.
Potential solutions:
Change something in your Deployment that will cause a redeployment of the pods; then the image will actually be pulled again.
Cleaner solution: don't use latest! Use semantic versioning, a date, or whatever strategy matches your approach. That way the tag changes every time, and the new image will always be pulled. Both options are sketched below.
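For illustration, two hedged sketches against the manifest from the question (resource names are taken from it; the exact tagging scheme is an assumption):

# Option 1: force a rollout so the pods are recreated and, with
# imagePullPolicy: Always, the image is pulled again (kubectl 1.15+).
kubectl -n prod rollout restart deployment/admin-v2-deployment-prod

# Option 2: reference a unique tag instead of "latest" in the manifest,
# e.g. the build ID that the pipeline already uses as $(tag):
#   image: appacr.azurecr.io/app-admin-v2:$(Build.BuildId)

With a unique tag, every release changes the Deployment's pod template, so kubectl apply triggers a rolling update on its own.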

How to start docker container with dynamic url

My requirement is as follows:
A developer creates a branch in Jenkins. Let's say the branch name is "mystory-101".
The developer pushes code to this branch.
A Jenkins job starts as soon as a commit is pushed to the branch "mystory-101" and creates a new Docker image for this branch if one hasn't been created already.
My application is a Node.js based app, so the Docker container starts with Node.js and deploys the code from the branch "mystory-101".
After the code is deployed and Node.js is running, I would also like this Node.js app to be accessible via the URL: https://mystory-101.mycompany.com
For this purpose I was reading this https://medium.com/swlh/ci-cd-pipeline-using-jenkins-dynamic-nodes-86ea854ff7a7
but I am not sure how to achieve step #5. Can you please advise how to achieve this automatically?
Reformatting the answers from the comments: given a Jenkins installation and a Kubernetes cluster, you can automate your deployments using a Jenkins plugin such as oc or kubernetes, or you may prefer to use the kubectl client directly, assuming your agents have that binary.
Without going through the RBAC specifics, you would probably need a ServiceAccount for Jenkins and a token for it (which can be found in a Secret named after your ServiceAccount). That ServiceAccount should have enough privileges to create resources in the namespaces you intend to deploy into -- usually the edit ClusterRole, with a namespace-scoped RoleBinding:
kubectl create sa jenkins -n my-namespace
kubectl create rolebinding jenkins-edit \
  --clusterrole=edit \
  --serviceaccount=my-namespace:jenkins \
  --namespace=my-namespace
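To fetch the token mentioned above, a minimal sketch; the first variant assumes a cluster version that still auto-creates a token Secret for the ServiceAccount, the second requires kubectl 1.24 or newer:

# Older clusters: read the auto-created token Secret
SECRET=$(kubectl get sa jenkins -n my-namespace -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -n my-namespace -o jsonpath='{.data.token}' | base64 -d

# Newer clusters: mint a token directly
kubectl create token jenkins -n my-namespace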
Once Jenkins is done building your image, you would deploy it to Kubernetes, most likely creating a Deployment, a Service, and an Ingress, substituting resource names, namespaces, and the FQDN requested in your Ingress to match your requirements.
Prepare your deployment yaml, something like:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-BRANCH
spec:
  selector:
    matchLabels:
      name: app-BRANCH
  template:
    metadata:
      labels:
        name: app-BRANCH
    spec:
      containers:
        - image: my-registry/path/to/image:BRANCH
          [...]
---
apiVersion: v1
kind: Service
metadata:
  name: app-BRANCH
spec:
  selector:
    name: app-BRANCH
  ports:
    [...]
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-BRANCH
spec:
  rules:
    - host: app-BRANCH.my-base-domain.com
      http:
        paths:
          - backend:
              serviceName: app-BRANCH
Then, have your Jenkins agent apply that configuration, substituting values properly:
sed "s|BRANCH|$BRANCH|" deploy.yaml | kubectl apply -n my-namespace -f-
kubectl wait -n my-namespace deploy/app-$BRANCH --for=condition=Available
kubectl logs -n my-namespace deploy/app-$BRANCH --tail=200
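Wrapped into a pipeline, a minimal scripted-pipeline sketch; the my-kubeconfig credential ID, the deploy.yaml file name, and the BRANCH_NAME variable are assumptions to adapt to your setup:

node {
    stage('Deploy branch') {
        // BRANCH_NAME is set by multibranch pipelines; otherwise set it yourself
        def branch = env.BRANCH_NAME
        // kubeconfig stored as a Jenkins "secret file" credential (hypothetical ID)
        withCredentials([file(credentialsId: 'my-kubeconfig', variable: 'KUBECONFIG')]) {
            sh """
                sed "s|BRANCH|${branch}|" deploy.yaml | kubectl apply -n my-namespace -f -
                kubectl wait -n my-namespace deploy/app-${branch} --for=condition=Available
            """
        }
    }
}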

applying changes to pod code source realtime - npm

I have a React.js app running in my pod, and I have mounted the source code from the host machine into the pod. It works fine, but when I change the code on the host machine, the source code in the pod also changes, yet the running site is not affected. Here is my manifest; what am I doing wrong?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
        phase: development
    spec:
      containers:
        - name: webapp
          image: xxxxxx
          command:
            - npm
          args:
            - run
            - dev
          env:
            - name: environment
              value: dev
            - name: AUTHOR
              value: webapp
          ports:
            - containerPort: 3000
          volumeMounts:
            - mountPath: /code
              name: code
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: code
          hostPath:
            path: /hosthome/xxxx/development/react-app/src
And I know for a fact that npm is not watching my changes; how can I resolve this in the pods?
Basically, you need to reload your application every time you change your code, and your pods don't reload or restart when the code under the /code directory changes. You will have to re-create your pod. Since you are using a Deployment, you can either:
kubectl delete pod <pod-where-your-app-is-running>
or
export PATCH='{"spec":{"template":{"metadata":{"annotations":{"timestamp":"'"$(date)"'"}}}}}'
kubectl patch deployment webapp -p "$PATCH"
Your pods should restart after that.
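On kubectl 1.15 and newer, the same effect is available as a single command; a sketch using the Deployment name from the question:

kubectl rollout restart deployment webapp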
What Rico has mentioned is correct: you need to patch or rebuild with every change. But you can avoid that by running minikube without a VM driver. Here is the command to run minikube without a VM driver (this only works on Linux); by doing this you can mount a host path into the pod. Hope this helps:
sudo minikube start --bootstrapper=localkube --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost -v=1

Docker VotingApp build/release Jenkins on Kubernetes not idempotent

I'm trying out deployments to Kubernetes via Jenkins with the Docker Voting App. I use the Azure Container Registry as a repository for the Docker images. On the first try everything is deployed OK.
When I re-run the pipeline without changing anything, I get an error.
Redis service definition:
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: redis
    version: alpine
  name: redis
  selfLink: /api/v1/namespaces//services/redis
spec:
  clusterIP:
  ports:
    - name:
      port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    app: redis
    version: alpine
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
The Docker images are built with the "latest" tag.
stage 'Checkout'
node {
    git 'https://github.com/*****/example-voting-app.git' // Checks out example voting app repository

    stage 'Docker Builds'
    docker.withRegistry('https://*****.azurecr.io', 'private-login') {
        parallel(
            "Build Worker App": { def myEnv = docker.build('*****.azurecr.io/example-voting-app-worker:latest', 'worker').push('latest') },
            "Build Result App": { def myEnv = docker.build('*****.azurecr.io/example-voting-app-result:latest', 'result').push('latest') },
            "Build Vote App": { def myEnv = docker.build('*****.azurecr.io/example-voting-app-vote:latest', 'vote').push('latest') }
        )
    }

    stage 'Kubernetes Deployment'
    sh 'kubectl apply -f kubernetes/basic-full-deployment.yml'
    sh 'kubectl delete pods -l app=vote'
    sh 'kubectl delete pods -l app=result'

    stage 'Smoke Test'
    sh 'kubectl get deployments'
}
Your definition contains fields that are auto-generated/managed by the apiserver. Some of them are created at the time of object creation and can't be updated afterwards. Remove the following fields from your file to make it happy:
metadata:
  creationTimestamp: null
  selfLink: /api/v1/namespaces//services/redis
status:
  loadBalancer: {}
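For reference, a cleaned-up sketch of the Redis Service above with those server-managed fields stripped (the empty clusterIP and port name are dropped here as well, on the assumption they aren't needed), which should apply idempotently:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
    version: alpine
  name: redis
spec:
  ports:
    - port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    app: redis
    version: alpine
  type: ClusterIP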
