Wrong image used by skaffold in deployment

Here is my skaffold.yaml:
apiVersion: skaffold/v2beta12
kind: Config
metadata:
  name: myimage
build:
  artifacts:
  - image: myimage
    docker:
      dockerfile: Dockerfile
deploy:
  kubectl:
    manifests:
    - k8s/deployment_auth.yaml
And here is k8s/deployment_auth.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myimage
        image: myimage:latest
        imagePullPolicy: Always
When I execute skaffold dev, I get
Starting deploy...
- deployment.apps/myapp created
Waiting for deployments to stabilize...myimage
- pod/myimage-5f74748bd6-ghvzh: creating container myimage
- deployment/myapp: container myimage is waiting to start: myimage:0127e9fb7b7b5bf9971f53c313c1c5c1877903ca5c194b5c315234cbf15191dc can't be pulled
- pod/myimage-5f74748bd6-ghvzh: container myimage is waiting to start: myimage:0127e9fb7b7b5bf9971f53c313c1c5c1877903ca5c194b5c315234cbf15191dc can't be pulled
- deployment/myapp failed. Error: container myimage is waiting to start: myimage:0127e9fb7b7b5bf9971f53c313c1c5c1877903ca5c194b5c315234cbf15191dc can't be pulled.
Why is that?
Where does skaffold come up with the image:hash pattern?

Skaffold replaces image references in your manifests with the image it just built; the long hash you see is the tag Skaffold generated for that build. For the replacement to work as intended, the reference must exactly match the image name listed under build.artifacts in skaffold.yaml, and myimage:latest does not. Alter your deployment to:
spec:
  containers:
  - name: myimage
    image: myimage # must match the corresponding build.artifacts.image in skaffold.yaml
    imagePullPolicy: Always
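To see exactly which image references Skaffold will deploy, you can print the hydrated manifests first; a quick check, assuming a Skaffold release that ships the render subcommand:
skaffold render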

Related

How to deploy a .NET core console app to Azure Kubernetes Service?

I have a .NET core console application to crawl a database periodically at certain intervals. I have dockerized it and have been able to run the docker image successfully from my local system. My ultimate objective is to deploy it from AKS. So I have pushed the aforementioned image to Azure Container Registry also. Please help me figure out the next steps on how to deploy the image from ACR into AKS.
The Dockerfile used to create the Docker image:
FROM mcr.microsoft.com/dotnet/runtime:5.0
COPY bin/Release/net5.0/publish/ App/
WORKDIR /App
ENTRYPOINT ["dotnet", "<app_name>.dll"]
The YAML file used to deploy to AKS:
apiVersion: apps/v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      Process: <app_name>
    creationTimestamp: null
    labels:
      app: <app_name>
    name: <app_name>
  spec:
    selector:
      app: <app_name>
  status:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      Process: <app_name>
    creationTimestamp: null
    labels:
      app: <app_name>
    name: <app_name>
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: <app_name>
    strategy: {}
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: <app_name>
      spec:
        containers:
        - env:
          image: "<acr_name>.azurecr.io/<image_name>:<version_tag>"
          name: <app_name>
          resources: {}
        restartPolicy: Always
  status: {}
kind: List
metadata: {}
I am relatively new to Docker technologies, and I am unsure whether this is the proper way to deploy .NET console apps to AKS, or whether this is the proper YAML configuration for deploying a console app to AKS. Please help me figure this out. Any help is appreciated; thanks in advance!
I guess you need to modify the Dockerfile with the correct project path:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /app
# copy csproj and restore as distinct layers
# COPY *.sln .
COPY dotnet-app/*.csproj ./dotnet-app/
RUN dotnet restore dotnet-app
# copy everything else and build app
COPY dotnet-app/. ./dotnet-app/
WORKDIR /app/dotnet-app
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS runtime
WORKDIR /app
COPY --from=build /app/dotnet-app/out ./
ENTRYPOINT ["dotnet", "dotnet-app.dll"]
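Once the image builds, push it to ACR so AKS can pull it; a sketch, assuming the Azure CLI is installed and <acr_name> and the tag v1 are placeholders:
az acr login --name <acr_name>
docker build -t <acr_name>.azurecr.io/dotnet-app:v1 .
docker push <acr_name>.azurecr.io/dotnet-app:v1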
Here are step-by-step instructions from the sample repository I built.
I would start with a reference doc from Microsoft's documentation and then ask a more specific question.
Ref:
https://learn.microsoft.com/en-us/dotnet/architecture/containerized-lifecycle/design-develop-containerized-apps/build-aspnet-core-applications-linux-containers-aks-kubernetes#push-the-image-into-the-azure-acr
A console app is simpler than a web app: you will need to take out the port and Service configs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-app
  labels:
    app: console-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console-app
  template:
    metadata:
      labels:
        app: console-app
    spec:
      containers:
      - name: console-app
        image: exploredocker.azurecr.io/console-app:v1
        imagePullPolicy: IfNotPresent
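If the cluster has not yet been granted pull access to the registry, you may also need to attach the ACR before applying the manifest; a sketch using placeholder names, assuming the Azure CLI:
az aks update --name <cluster_name> --resource-group <resource_group> --attach-acr <acr_name>
kubectl apply -f deployment.yaml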

k8s deployment with image from private registry [duplicate]

This question already has answers here:
Pull image Azure Container Registry - Kubernetes
(2 answers)
Kubernetes pull from multiple private docker registries
(1 answer)
Can anyone please guide how to pull private images from Kubernetes?
(2 answers)
Closed 2 years ago.
I have a k8s deployment YAML which needs to pull an image from a private registry. Where should I put the host, user, and password?
deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tra
  namespace: ba
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tra
  template:
    metadata:
      labels:
        app: tra
    spec:
      containers:
      - name: tra
        image: de/sec:0.0.10
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
I found this, but it doesn't really help:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
The doc explains in detail how to pull images from a private registry.
In summary:
Create a secret using the following command:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then specify the secret name in your deployment file by adding the following lines:
imagePullSecrets:
- name: regcred
So, create the secret and modify your deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tra
  namespace: ba
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tra
  template:
    metadata:
      labels:
        app: tra
    spec:
      containers:
      - name: tra
        image: de/sec:0.0.10
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: regcred
If you want to create the secret from a file, then put the following into secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: <namespace>
data:
  .dockerconfigjson: <add here the output of: cat ~/.docker/config.json | base64 -w 0>
type: kubernetes.io/dockerconfigjson
Then run kubectl apply -f secret.yaml.
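To verify that the secret holds the expected registry credentials, you can decode it back; this check is taken from the same Kubernetes doc, with regcred being the secret created above:
kubectl get secret regcred --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode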

How to deploy nginx.config file in kubernetes

Recently I started to study Kubernetes and right now I can deploy nginx with default options.
But how can I deploy my own nginx.conf in Kubernetes?
Does somebody have a simple example?
Create the YAML for an nginx deployment:
kubectl run --image=nginx nginx -o yaml --dry-run
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Create a ConfigMap with the nginx configuration:
kubectl create configmap nginx-conf --from-file=./nginx.conf
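For reference, a minimal nginx.conf that could be fed into the ConfigMap; the contents are purely illustrative, so substitute your real configuration:
# illustrative minimal config; replace with your real nginx.conf
events {}
http {
  server {
    listen 80;
    location / {
      return 200 "hello from nginx\n";
    }
  }
}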
Mount the file:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf
          name: nginx-conf
          subPath: nginx.conf
      volumes:
      - configMap:
          name: nginx-conf
        name: nginx-conf
You can build your own image on top of the default nginx image and copy your own nginx.conf into it.
Once the image is created, you can push it to Docker Hub or your own private repository, and then use that image in Kubernetes instead of the default nginx one.
This answer covers the process of creating a custom nginx image.
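A minimal sketch of such a Dockerfile, assuming your nginx.conf sits next to it in the build context:
FROM nginx:latest
# overwrite the default configuration with your own
COPY nginx.conf /etc/nginx/nginx.conf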
Once the image can be pulled into your Kubernetes cluster, you can deploy it like so:
kubectl run nginx --image=<your-docker-hub-username>/<custom-nginx>

Getting a validation error when trying to apply a YAML file in AKS

I'm following along with this tutorial. I'm at the stage where I deploy using the command:
kubectl apply -f azure-vote-all-in-one-redis.yaml
The YAML file looks like this:
version: '3'
services:
  azure-vote-back:
    image: redis
    container_name: azure-vote-back
    ports:
    - "6379:6379"
  azure-vote-front:
    build: ./azure-vote
    image: azure-vote-front
    container_name: azure-vote-front
    environment:
      REDIS: azure-vote-back
    ports:
    - "8080:80"
However, I'm getting the error:
error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
If I add an apiVersion and a Kind, like this:
apiVersion: v1
kind: Pod
Then I get the error:
error validating data: ValidationError(Pod): unknown field "services" in io.k8s.api.core.v1.Pod
Am I missing something here?
It looks like you're trying to apply a Docker Swarm/Compose YAML file to your Kubernetes cluster. This will not work directly without a conversion.
Using a tool like Kompose to convert your Docker YAML into k8s YAML is a useful step in migrating from one to the other.
For more information see https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
First of all, every YAML definition should follow the AKMS structure: apiVersion, kind, metadata, spec. Also, you should avoid bare Pods and use Deployments, because Deployments handle pods on their own.
Here's a sample vote-back/front definition:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 60%
      maxUnavailable: 60%
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: aksrg.azurecr.io/azure-vote-front:voting-dev
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
        - name: MY_POD_NAMESPACE
          valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
      imagePullSecrets:
      - name: k8s
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
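With those resources saved into the tutorial's single file, separated by ---, the original command should now pass validation; you can also verify first with a client-side dry run (the flag assumes a reasonably recent kubectl):
kubectl apply --dry-run=client -f azure-vote-all-in-one-redis.yaml
kubectl apply -f azure-vote-all-in-one-redis.yaml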
In my case, I am deploying my project on GKE via Travis. In my Travis config, I am calling a shell script (deploy.sh).
In the deploy.sh file, I have written all the steps to create Kubernetes resources:
### Deploy
# Apply k8s config
kubectl apply -f .
So here, I replaced kubectl apply -f . with the individual file names as follows:
### Deploy
# Apply k8s config
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
And then, the error is fixed!

Directory not created after defining the mounting path for the configMap in a Deployment yaml

I was trying to add a directory for mounting the ConfigMap in the Deployment YAML of the Kubernetes artifact. Though I defined the volumes and the mount path within the container, I couldn't find the directory when I exec'd into the pod.
My deployment file looks like this:
apiVersion: v1
kind: Deployment
metadata:
  name: test-deployment
  namespace: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reactor
    spec:
      containers:
      - name: reactor
        image: test/chain
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /test
          name: chaindata
        ports:
        - containerPort: 3003
      volumes:
      - configMap:
          name: chaindata
          defaultMode: 420
        name: chaindata
I found my error: I was looking in my app directory, but the directory had actually been created in the root folder. Hence, I changed the mountPath in the deployment file from /test to
src/lib/app/functionchain
Thanks for the concern.
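To confirm where a ConfigMap volume actually lands, one option is to list the mount point from inside the pod; a sketch, with the pod name and path as placeholders:
kubectl exec -it <pod-name> -n test -- ls -la <mount-path>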
