GKE not able to pull image from artifactory - gitlab

I am using GKE, GitLab and JFrog Artifactory to achieve CI and CD. All my steps work, but my deployment to GKE fails because it is not able to pull my image, even though the image does exist. I have included my deployment YAML and the error message below.
Below is my deployment file; I have hardcoded the image to make clear that the image does exist.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: go
  name: hello-world-go
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  selector:
    matchLabels:
      app: go
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 33%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: go
    spec:
      containers:
        -
          # image: "<IMAGE_NAME>"
          image: cicd-docker-local.jfrog.io/stage_proj:56914646
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 2
            periodSeconds: 2
          name: go
          ports:
            -
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 2
            periodSeconds: 2
      imagePullSecrets:
        - name: my.secret
Below is a snippet of my GitLab CI YAML file:
deploy to dev:
  stage: Dev-Deploy
  image: dtzar/helm-kubectl
  script:
    - kubectl config set-cluster mycluster --server="$KUBE_URL" --insecure-skip-tls-verify=true
    - kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"
    - kubectl config set-context default --cluster=mycluster --user=admin
    - kubectl config use-context default; sleep 10
    - kubectl delete secret my.secret
    - kubectl create secret docker-registry my.secret --docker-server=$ARTIFACTORY_DOCKER_REPOSITORY --docker-username=$ARTIFACTORY_USER --docker-password=$ARTIFACTORY_PASS --docker-email="abc@gmail.com"
    - echo ${STAGE_CONTAINER_IMAGE}
    - kubectl apply -f deployment.yaml
    - kubectl rollout status -w "deployment/hello-world-go"
    # - kubectl rollout status -f deployment.yaml
    - kubectl get all,ing -l app='hello-world-go'
  only:
    - master
I get an error like the one below in GKE:
Cannot pull image 'cicd-docker-local.jfrog.io/stage_proj:56914646' from the registry.
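A common first check in this situation (illustrative, using the names above): the secret's --docker-server value must match the registry host in the image reference, and the credentials must be able to pull the image directly.
kubectl get secret my.secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
# The decoded JSON should list cicd-docker-local.jfrog.io as the registry host.
docker login cicd-docker-local.jfrog.io -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASS
docker pull cicd-docker-local.jfrog.io/stage_proj:56914646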

Related

How to deploy an nginx.conf file in Kubernetes

Recently I started to study Kubernetes, and right now I can deploy nginx with the default options.
But how can I deploy my own nginx.conf in Kubernetes?
Does somebody have a simple example?
Create the YAML for the nginx deployment:
kubectl run --image=nginx nginx -o yaml --dry-run
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Create a ConfigMap with the nginx configuration:
kubectl create configmap nginx-conf --from-file=./nginx.conf
Mount the file:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf
          name: nginx-conf
          subPath: nginx.conf
      volumes:
      - configMap:
          name: nginx-conf
        name: nginx-conf
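One caveat worth adding to this approach: a file mounted via subPath is not refreshed when the ConfigMap changes, so after editing nginx.conf you would typically re-apply the ConfigMap and restart the pods. With a recent kubectl, for example:
kubectl create configmap nginx-conf --from-file=./nginx.conf --dry-run=client -o yaml | kubectl apply -f -
kubectl rollout restart deployment nginx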
You can build your own image on top of the default nginx image, then copy your own nginx.conf into that image.
Once the image is created, you can push it to Docker Hub or your own private repository, and then use that image instead of the default nginx one in Kubernetes.
This answer covers the process of creating a custom nginx image.
Once the image can be pulled into your Kubernetes cluster, you can deploy it like so:
kubectl run nginx --image=<your-docker-hub-username>/<custom-nginx>
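A minimal sketch of such a build (assuming your nginx.conf sits in the build context; the image name is illustrative):
cat > Dockerfile <<'EOF'
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
EOF
docker build -t <your-docker-hub-username>/custom-nginx .
docker push <your-docker-hub-username>/custom-nginx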

Getting a validation error when trying to apply a YAML file in AKS

I'm following along with this tutorial. I'm at the stage where I deploy using the command:
kubectl apply -f azure-vote-all-in-one-redis.yaml
The YAML file looks like this:
version: '3'
services:
  azure-vote-back:
    image: redis
    container_name: azure-vote-back
    ports:
      - "6379:6379"
  azure-vote-front:
    build: ./azure-vote
    image: azure-vote-front
    container_name: azure-vote-front
    environment:
      REDIS: azure-vote-back
    ports:
      - "8080:80"
However, I'm getting the error:
error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
If I add an apiVersion and a Kind, like this:
apiVersion: v1
kind: Pod
Then I get the error:
error validating data: ValidationError(Pod): unknown field "services" in io.k8s.api.core.v1.Pod
Am I missing something here?
It looks like you're trying to apply a Docker Swarm/Compose YAML file to your Kubernetes cluster. This will not work directly without a conversion.
Using a tool like Kompose to convert your Compose YAML into Kubernetes YAML is a useful step in migrating from one to the other.
For more information see https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
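A minimal sketch of that conversion (assuming kompose is installed; the generated file names may differ):
kompose convert -f azure-vote-all-in-one-redis.yaml
# kompose writes one Deployment/Service manifest per Compose service into the current directory:
kubectl apply -f .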
First of all, every YAML definition should follow the AKMS structure: apiVersion, kind, metadata, spec. Also, you should avoid bare Pods and use Deployments instead, because Deployments manage Pods on their own.
Here's a sample vote-back/front definition:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 60%
      maxUnavailable: 60%
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: aksrg.azurecr.io/azure-vote-front:voting-dev
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
        - name: MY_POD_NAMESPACE
          valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
      imagePullSecrets:
      - name: k8s
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
In my case, I am deploying my project on GKE via Travis. In my Travis file, I am calling a shell script (deploy.sh).
In the deploy.sh file, I have written all the steps to create the Kubernetes resources:
### Deploy
# Apply k8s config
kubectl apply -f .
So here, I replaced kubectl apply -f . with the individual file names as follows:
### Deploy
# Apply k8s config
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
And with that, the error was fixed!

Elasticsearch Pod failing after Init state without logs

I'm trying to get an Elasticsearch StatefulSet to work on AKS but the pods fail and are terminated before I'm able to see any logs. Is there a way to see the logs after the Pods are terminated?
This is the sample YAML file I'm running with kubectl apply -f es-statefulset.yaml:
# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v6.4.1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v6.4.1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v6.4.1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch:6.4.1
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: "1000m"
            memory: "2048Mi"
          requests:
            cpu: "100m"
            memory: "1024Mi"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "bootstrap.memory_lock"
          value: "true"
        - name: "ES_JAVA_OPTS"
          value: "-Xms1024m -Xmx2048m"
        - name: "discovery.zen.ping.unicast.hosts"
          value: "elasticsearch-logging"
      # A) This volume mount (emptyDir) can be set whenever not working with a
      # cloud provider. There will be no persistence. If you want to avoid
      # data wipeout when the pod is recreated make sure to have a
      # "volumeClaimTemplates" in the bottom.
      # volumes:
      # - name: elasticsearch-logging
      #   emptyDir: {}
      #
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
  # B) This will request storage on Azure (configure other clouds if necessary)
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-logging
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: default
      resources:
        requests:
          storage: 64Gi
When I "follow" the pods creating looks like this:
I tried to get the logs from the terminated instance by doing logs -n kube-system elasticsearch-logging-0 -p and noting.
I'm trying to build on top of this sample from the official
(unmaintained) k8s repo. Which worked at first, but after I tried updating the deployment I had it just completely failed and I haven't been able to get it back up. I'm using the trial version of Azure AKS
I appreciate any suggestions
EDIT 1:
The result of kubectl describe statefulset elasticsearch-logging -n kube-system is the following (with an almost identical Init-Terminated pod flow):
Name:               elasticsearch-logging
Namespace:          kube-system
CreationTimestamp:  Mon, 24 Sep 2018 10:09:07 -0600
Selector:           k8s-app=elasticsearch-logging,version=v6.4.1
Labels:             addonmanager.kubernetes.io/mode=Reconcile
                    k8s-app=elasticsearch-logging
                    kubernetes.io/cluster-service=true
                    version=v6.4.1
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"elasticsea...
Replicas:           0 desired | 1 total
Update Strategy:    RollingUpdate
Pods Status:        0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=elasticsearch-logging
                    kubernetes.io/cluster-service=true
                    version=v6.4.1
  Service Account:  elasticsearch-logging
  Init Containers:
   elasticsearch-logging-init:
    Image:        alpine:3.6
    Port:         <none>
    Host Port:    <none>
    Command:
      /sbin/sysctl
      -w
      vm.max_map_count=262144
    Environment:  <none>
    Mounts:       <none>
  Containers:
   elasticsearch-logging:
    Image:       docker.elastic.co/elasticsearch/elasticsearch:6.4.1
    Ports:       9200/TCP, 9300/TCP
    Host Ports:  0/TCP, 0/TCP
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     100m
      memory:  1Gi
    Environment:
      NAMESPACE:                         (v1:metadata.namespace)
      bootstrap.memory_lock:             true
      ES_JAVA_OPTS:                      -Xms1024m -Xmx2048m
      discovery.zen.ping.unicast.hosts:  elasticsearch-logging
    Mounts:
      /data from elasticsearch-logging (rw)
  Volumes:  <none>
Volume Claims:
  Name:          elasticsearch-logging
  StorageClass:  default
  Labels:        <none>
  Annotations:   <none>
  Capacity:      64Gi
  Access Modes:  [ReadWriteMany]
Events:
  Type    Reason            Age  From                    Message
  ----    ------            ---- ----                    -------
  Normal  SuccessfulCreate  53s  statefulset-controller  create Pod elasticsearch-logging-0 in StatefulSet elasticsearch-logging successful
  Normal  SuccessfulDelete  1s   statefulset-controller  delete Pod elasticsearch-logging-0 in StatefulSet elasticsearch-logging successful
The flow remains the same.
You're assuming that the pods are terminated due to an ES-related error.
I'm not so sure ES even got to run to begin with, which would explain the lack of logs.
Having multiple pods with the same name is extremely suspicious, especially in a StatefulSet, so something's wrong there.
I'd try kubectl describe statefulset elasticsearch-logging -n kube-system first; that should explain what's going on -- probably some issue mounting the volumes prior to running ES.
I'm also pretty sure you want to change ReadWriteOnce to ReadWriteMany.
Hope this helps!
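For reference, diagnostics along these lines (using the names from the question) could look like:
kubectl describe statefulset elasticsearch-logging -n kube-system
kubectl describe pod elasticsearch-logging-0 -n kube-system
# Cluster events often show mount or scheduling failures that never reach container logs:
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp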
Yes, there's a way. You can SSH into the machine running your pods and, assuming you are using Docker, run:
docker ps -a # shows all containers, including Exited ones (some belonging to your pod)
Then:
docker logs <container-id-of-your-exited-elasticsearch-container>
This also works if you are using CRI-O or containerd; there the command would be something like:
crictl logs <container-id>
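To find the right node and container ID first, something like this should work:
kubectl get pod elasticsearch-logging-0 -n kube-system -o wide   # the NODE column shows where it ran
kubectl describe pod elasticsearch-logging-0 -n kube-system | grep -i 'container id'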

Pods do not resolve the domain names of a service through ingress

I have a problem: the pods in my minikube cluster are not able to reach a service through its domain name.
To run minikube I use the following commands (running on Windows 10):
minikube start --vm-driver hyperv;
minikube addons enable kube-dns;
minikube addons enable ingress;
This is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: hello-world
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-world
    spec:
      containers:
      - image: karthequian/helloworld:latest
        imagePullPolicy: Always
        name: hello-world
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
This is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
  selfLink: /api/v1/namespaces/default/services/hello-world
spec:
  ports:
  - nodePort: 31595
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: hello-world
  sessionAffinity: None
  type: ExternalName
  externalName: minikube.local.com
status:
  loadBalancer: {}
This is my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
spec:
  rules:
  - host: minikube.local.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
So, if I go inside the hello-world pod and, from /bin/bash, run curl minikube.local.com or nslookup minikube.local.com, neither works.
So how can I make sure that the pods can resolve the DNS name of the service?
I know I can specify hostAliases in the deployment definition, but is there an automatic way that will update the Kubernetes DNS?
So, you want to expose your app on Minikube? I've just tried it using the default ClusterIP service type (essentially, removing the ExternalName stuff you had), and with the YAML below I can see your service on https://192.168.99.100, where the Ingress controller lives.
The service now looks like so:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: hello-world
And the ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
Note: Within the cluster your service is now available via hello-world.default (that's the DNS name assigned by Kubernetes within the cluster), and from the outside you'd need to map, say, hello-world.local to 192.168.99.100 in the /etc/hosts file on your host machine.
Alternatively, if you change the Ingress resource to - host: hello-world.local then you can (from the host) reach your service using this FQDN like so: curl -H "Host: hello-world.local" 192.168.99.100.
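To double-check in-cluster DNS from any pod (assuming the pod's image ships nslookup and wget; service DNS names follow the <service>.<namespace> convention):
kubectl exec -it <some-pod> -- nslookup hello-world.default.svc.cluster.local
kubectl exec -it <some-pod> -- wget -qO- http://hello-world.default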

Pull image Azure Container Registry - Kubernetes

Does anyone have any advice on how to pull from Azure Container Registry whilst running within Azure Container Service (Kubernetes)?
I've tried a sample deployment like the following, but the image pull is failing:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      name: jenkins-master
      labels:
        name: jenkins-master
    spec:
      containers:
      - name: jenkins-master
        image: myregistry.azurecr.io/infrastructure/jenkins-master:1.0.0
        imagePullPolicy: Always
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 5
        ports:
        - name: jenkins-web
          containerPort: 8080
        - name: jenkins-agent
          containerPort: 50000
I got this working after reading this info.
http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod
So first, create the registry access secret:
kubectl create secret docker-registry myregistrykey --docker-server=https://myregistry.azurecr.io --docker-username=ACR_USERNAME --docker-password=ACR_PASSWORD --docker-email=ANY_EMAIL_ADDRESS
Replace the server address with your ACR's address, and the USERNAME, PASSWORD and EMAIL with the values for your ACR's admin user. Note: the email address can be any value.
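If you don't have the admin credentials to hand, they can be retrieved with the Azure CLI, assuming the registry's admin user is enabled (registry name is illustrative):
az acr update --name myregistry --admin-enabled true
az acr credential show --name myregistry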
Then, in the deployment, you simply tell Kubernetes to use that key for pulling the image, like so:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      name: jenkins-master
      labels:
        name: jenkins-master
    spec:
      containers:
      - name: jenkins-master
        image: myregistry.azurecr.io/infrastructure/jenkins-master:1.0.0
        imagePullPolicy: Always
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 5
        ports:
        - name: jenkins-web
          containerPort: 8080
        - name: jenkins-agent
          containerPort: 50000
      imagePullSecrets:
      - name: myregistrykey
This is something we've actually made easier. When you provision a Kubernetes cluster through the Azure CLI, a service principal is created with contributor privileges. This enables pulling from any Azure Container Registry in the subscription.
There was a PR, https://github.com/kubernetes/kubernetes/pull/40142, that was merged into new deployments of Kubernetes. It won't work on existing Kubernetes clusters.
