I have a k8s deployment YAML that needs to pull its image from a private registry. Where should I put the
host
user
password
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tra
  namespace: ba
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tra
  template:
    metadata:
      labels:
        app: tra
    spec:
      containers:
        - name: tra
          image: de/sec:0.0.10
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
I found this, but it doesn't really help:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
The doc describes in detail how to pull images from a private registry.
In summary:
Create the secret using the following command:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then specify the secret name in your deployment file by adding the following lines:
imagePullSecrets:
  - name: regcred
So, create the secret and modify your deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tra
  namespace: ba
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tra
  template:
    metadata:
      labels:
        app: tra
    spec:
      containers:
        - name: tra
          image: de/sec:0.0.10
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: regcred
If you want to create the secret from a file instead, put the following into secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: <namespace>
data:
  .dockerconfigjson: < add here output of cat ~/.docker/config.json | base64 -w 0 >
type: kubernetes.io/dockerconfigjson
Then run kubectl apply -f secret.yaml.
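To verify the secret afterwards, a quick check (these two commands come from the same docs page; add -n <namespace> if you created the secret in a specific namespace):
# Show the secret as Kubernetes stores it
$ kubectl get secret regcred --output=yaml
# Decode the embedded Docker config to confirm the registry, username, and password
$ kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode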
Related
I am relatively new to Docker and Kubernetes technologies. My requirement is to deploy one web and one worker (.Net background service) project in a single deployment.
This is how my deployment.yml file looks:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: xxxxx.azurecr.io/worker:#{Build.BuildId}#
          #image: xxxxx.azurecr.io/web
          imagePullPolicy: Always
          #ports:
          #- containerPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: xxxxx.azurecr.io/web:#{Build.BuildId}#
          #image: xxxxx.azurecr.io/web
          imagePullPolicy: Always
          ports:
            - containerPort: 80
This is how my service.yml file looks:
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: worker
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: web
What I have found is that if I keep both in the service.yml file, only one gets deployed to Kubernetes, but if I comment one out and deploy them one by one, they deploy fine.
Is there any rule that we can't have both in a single file? Any reason why it's not working together but works individually?
One more question: is there any way to look inside the worker service pod, something like remoting into it, to see what exactly is going on there? Even if it's a console application, is there any way to read what it's printing to the console after deployment?
This issue was resolved in the comments section and I decided to provide a Community Wiki answer just for better visibility to other community members.
It is possible to group multiple Kubernetes resources in the same file, but it is important to separate them using three dashes (“---”).
It's also worth mentioning that resources will be created in the order they appear in the file.
For more information, see the Organizing resource configurations documentation.
I've created an example to demonstrate how we can create a simple app-1 application (Deployment + Service) using a single manifest file:
$ cat app-1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: app-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
        - image: nginx
          name: nginx
NOTE: Resources are created in the order they appear in the file:
$ kubectl apply -f app-1.yml
service/app-1 created
deployment.apps/app-1 created
$ kubectl get deploy,svc
NAME                    READY   UP-TO-DATE
deployment.apps/app-1   1/1     1
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)
service/app-1   ClusterIP   10.8.14.179   <none>        80/TCP
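As for the second part of the question (seeing what the worker pod is printing to the console), a couple of standard kubectl commands cover that; the pod name below is a placeholder you would take from the first command's output:
# Find the worker pod created by the Deployment
$ kubectl get pods -l app=worker
# Stream the container's stdout/stderr (the console output of the .NET background service)
$ kubectl logs -f <worker-pod-name>
# Open an interactive shell inside the pod, assuming the image ships a shell
$ kubectl exec -it <worker-pod-name> -- /bin/sh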
I'm new to Kubernetes and I was trying to deploy a Node.js service to Kubernetes. For that I created a Docker image, uploaded it to Docker Hub, and finally created a deployment file that contains all the configuration required to accomplish the deployment.
The deployment file is shown below. I then executed the command 'kubectl apply -f deployment_local.yaml' and came across this error: "*spec.template.metadata.labels:Invalid value map[string]string{"app":"nodejs\u00a0\u00a0"}:selector does not match template labels"
I'm trying to fix this but haven't been able to. Please help me understand this error; I've been struggling with it for a long time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
        - name: nodeapp
          image: lucasseabra/nodejs-starter
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: nodejs
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30001
As the error message was trying to tell you, there are two "non-breaking space" characters after nodejs: map[string]string{"app":"nodejs\u00a0\u00a0"}
I would guess it was a side-effect of copy-pasting from a webpage.
If you do a "select all" on your posted question here, you'll see that SO has converted the two characters into normal spaces, but they do show up in the selection extending past the "nodejs" text.
If your editor is not able to show you the characters, then either manually retype the labels, or try copying this (which is just yours but with the trailing spaces removed):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
        - name: nodeapp
          image: lucasseabra/nodejs-starter
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: nodejs
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30001
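If you would rather find and remove the offending characters in place than retype or copy, here is a quick sketch (assuming GNU grep and GNU sed, which support \xHH byte escapes; the file name is the one from your command):
# Show lines containing the UTF-8 non-breaking space (bytes 0xC2 0xA0)
$ grep -nP '\xC2\xA0' deployment_local.yaml
# Strip those bytes from the file in place
$ sed -i 's/\xc2\xa0//g' deployment_local.yaml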
So in Docker, I can do docker run -e to pass in environment variables.
But how does one do that for Azure Kubernetes pods? They aren't username/password kinds of variables but rather URL segments we would want to use during startup.
http://webapi/august, where august is what we would want to pass in; then in September, we would want to pass in september.
These aren't the best examples, but they show what I'm looking for.
Thanks.
There is a clear example in the Kubernetes documentation for this: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Short example from there:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          env:
            - name: DEMO_GREETING
              value: "Hello from the environment"
            - name: DEMO_FAREWELL
              value: "Such a sweet sorrow"
Take note of the env section.
Later, if you want to change a variable on the fly, you can use the kubectl set env command (see kubectl set env -h).
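For example, a quick sketch of changing one of the variables from the Deployment above on a live cluster (the deployment and variable names are taken from that example):
# Update DEMO_GREETING on the running deployment; this triggers a rolling restart of the pods
$ kubectl set env deployment/nginx-deployment DEMO_GREETING="Hello from the updated environment"
# List the environment variables currently set on the deployment
$ kubectl set env deployment/nginx-deployment --list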
I have a private Azure Container Registry that contains two containers, a Windows-based one (mcr.microsoft.com/dotnet/core/samples:aspnetapp) and a Linux-based one (a custom test). I created a secret etc., which seems OK. When I try to deploy those with Kubernetes the following happens:
The Linux-based container from the private repo starts normally
The Windows-based container from Docker Hub starts normally
The SAME Windows-based container from the private repo throws an error: Back-off pulling image "spintheblackcircleshop.azurecr.io/aspnetapp"
Anyone?
test.yaml:
apiVersion: v1
items:
  # basplus deployment
  - apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: aspnetapp-private
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: private
        spec:
          terminationGracePeriodSeconds: 100
          containers:
            - name: xxx
              image: spintheblackcircleshop.azurecr.io/aspnetapp
          imagePullSecrets:
            - name: mysecret
  - apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: aspnetapp-public
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: public
        spec:
          terminationGracePeriodSeconds: 100
          containers:
            - name: xxx
              image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
          imagePullSecrets:
            - name: mysecret
  - apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: aspnetapp-private-sleep
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: private-sleep
        spec:
          terminationGracePeriodSeconds: 100
          containers:
            - name: xxx
              image: spintheblackcircleshop.azurecr.io/danielm-test-sleep
          imagePullSecrets:
            - name: mysecret
  # end
kind: List
metadata: {}
AKS doesn't support Windows nodes yet. There is no way to run Windows containers in AKS at the time of writing (05/05/2019).
Edit: fair point raised by the other answer. You actually can run Windows containers in ACI from AKS, but it's not exactly in AKS :)
Well, AKS does not support Windows nodes currently, but you can run Windows containers in it if you install the Virtual Kubelet in AKS. It takes advantage of ACI.
See the steps to install the Virtual Kubelet and run a Windows container in the document Use Virtual Kubelet with Azure Kubernetes Service (AKS).
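As a rough sketch of what that looks like once the Virtual Kubelet / ACI connector is installed: the pod spec has to be steered onto the virtual node with a nodeSelector and a matching toleration. The selector values and toleration key below are the common Virtual Kubelet defaults, not something taken from this cluster, so check the linked doc for the exact values your installation uses:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp-private-aci
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnetapp-private-aci
  template:
    metadata:
      labels:
        app: aspnetapp-private-aci
    spec:
      containers:
        - name: aspnetapp
          image: spintheblackcircleshop.azurecr.io/aspnetapp
      imagePullSecrets:
        - name: mysecret
      # Target the Windows virtual node backed by ACI (values are the usual
      # Virtual Kubelet defaults; adjust to match your installation)
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: windows
        type: virtual-kubelet
      tolerations:
        - key: virtual-kubelet.io/provider
          operator: Exists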
I'm following along with this tutorial. I'm at the stage where I deploy using the command:
kubectl apply -f azure-vote-all-in-one-redis.yaml
The YAML file looks like this:
version: '3'
services:
  azure-vote-back:
    image: redis
    container_name: azure-vote-back
    ports:
      - "6379:6379"
  azure-vote-front:
    build: ./azure-vote
    image: azure-vote-front
    container_name: azure-vote-front
    environment:
      REDIS: azure-vote-back
    ports:
      - "8080:80"
However, I'm getting the error:
error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
If I add an apiVersion and a Kind, like this:
apiVersion: v1
kind: Pod
Then I get the error:
error validating data: ValidationError(Pod): unknown field "services" in io.k8s.api.core.v1.Pod
Am I missing something here?
It looks like you're trying to apply a Docker Swarm/Compose YAML file to your Kubernetes cluster. This will not work directly without a conversion.
Using a tool like Kompose to convert your Docker YAML into k8s YAML is a useful step in migrating from one to the other.
For more information see https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
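For example, a minimal sketch assuming kompose is installed (the output file name here is just my choice):
# Convert the Compose-format file into Kubernetes manifests, written to a single output file
$ kompose convert -f azure-vote-all-in-one-redis.yaml -o azure-vote-k8s.yaml
# Then apply the generated manifests
$ kubectl apply -f azure-vote-k8s.yaml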
So first of all, every YAML definition should follow the AKMS structure: apiVersion, kind, metadata, spec. Also, you should avoid bare Pods and use Deployments, because Deployments manage Pods on their own.
Here's a sample vote-back/front definition:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
        - name: azure-vote-back
          image: redis
          ports:
            - containerPort: 6379
              name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
    - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 60%
      maxUnavailable: 60%
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
        - name: azure-vote-front
          image: aksrg.azurecr.io/azure-vote-front:voting-dev
          ports:
            - containerPort: 80
          env:
            - name: REDIS
              value: "azure-vote-back"
            - name: MY_POD_NAMESPACE
              valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
      imagePullSecrets:
        - name: k8s
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: azure-vote-front
In my case, I am deploying my project on GKE via Travis. In my travis file, I am calling a shell file (deploy.sh).
In the deploy.sh file, I have written all the steps to create the Kubernetes resources:
### Deploy
# Apply k8s config
kubectl apply -f .
So here, I replaced kubectl apply -f . with the individual file names as follows:
### Deploy
# Apply k8s config
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
And then, the error is fixed!