Not able to create Azure Container Instance with CLI using private image - azure

I can't deploy a pod using a private image (ACR) with the CLI and a YAML file.
Deploying directly from the registry using either az container or kubectl run does work, however.
Pod status:
"containers": [
{
"count": 3,
"firstTimestamp": "2017-08-26T07:31:36+00:00",
"lastTimestamp": "2017-08-26T07:32:20+00:00",
"message": "Failed: Failed to pull image \"ucont01.azurecr.io/unreal-deb\": rpc error: code 2 desc Error: im age unreal-deb:latest not found",
"type": "Warning"
},
],
},
YAML file:
apiVersion: v1
kind: Pod
metadata:
  generateName: "game-"
  namespace: default
spec:
  nodeName: aci-connector
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  containers:
  - name: unreal-dev-server
    image: ucont01.azurecr.io/unreal-deb
    imagePullPolicy: Always
    ports:
    - containerPort: 7777
      protocol: UDP
  imagePullSecrets:
  - name: registrykey

Unfortunately the aci-connector-k8s doesn't currently support images from private repositories. There is an issue open to add support but it's not currently implemented.
https://github.com/Azure/aci-connector-k8s/issues/35
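As the question notes, deploying straight from the registry with az container does work. A minimal sketch, assuming the admin user is enabled on the registry and the resource group name is a placeholder:
az container create \
  --resource-group myResourceGroup \
  --name unreal-deb \
  --image ucont01.azurecr.io/unreal-deb:latest \
  --registry-login-server ucont01.azurecr.io \
  --registry-username <ACR_USERNAME> \
  --registry-password <ACR_PASSWORD>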

According to your description, please first check your repositories via the Azure portal.
Using your YAML, it works for me:
apiVersion: v1
kind: Pod
metadata:
  generateName: "game-"
  namespace: default
spec:
  nodeName: k8s-agent-379980cb-0
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  containers:
  - name: unreal-dev-server
    image: jasontest.azurecr.io/samples/nginx
    imagePullPolicy: Always
    ports:
    - containerPort: 7777
      protocol: TCP
  imagePullSecrets:
  - name: secret1
Here is my secret:
jason@k8s-master-379980CB-0:~$ kubectl get secret
NAME                  TYPE                                  DATA      AGE
default-token-865dj   kubernetes.io/service-account-token   3         1h
secret1               kubernetes.io/dockercfg               1         47m

If the credentials (corresponding to registrykey) are incorrect, you may get an 'image not found' error even though the image exists. You may want to verify the registrykey credentials again.
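If you need to recreate the secret, a docker-registry secret for ACR can be built like this (a sketch; the placeholders are your registry's admin credentials, and the email can be any value):
kubectl create secret docker-registry registrykey \
  --docker-server=https://ucont01.azurecr.io \
  --docker-username=<ACR_USERNAME> \
  --docker-password=<ACR_PASSWORD> \
  --docker-email=any@example.com
You can then run kubectl describe pod <pod-name> and check the Events section for pull errors.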

Related

Kubernetes CrashLoopBackOff With Minikube

So I am learning about Kubernetes with a guide, and I am trying to deploy a MongoDB Pod with 1 replica. This is the deployment config file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
I also tried to deploy a Mongo-Express Pod with almost the same config file, but I keep getting CrashLoopBackOff for both Pods. From the little understanding I have, this is caused by the container failing and restarting in a cycle. I went through the events with kubectl get events and saw that a warning with the message Back-off restarting failed container keeps occurring. I also did a little digging around and came across a solution that says to add
command: ['sleep']
args: ['infinity']
That fixed the CrashLoopBackOff issue, but when I try to get the logs for the Pod, nothing is displayed in the terminal. I need some help and, if possible, an explanation of how the command and args seem to fix it, and how I can stop this crash from happening to my Pods. Thank you very much.
My advice is to deploy MongoDB as a StatefulSet on Kubernetes.
In a stateful application, the N replicas of master nodes manage several worker nodes under a cluster, so if any master node goes down, another ordinal instance becomes active to execute the workflow. In a StatefulSet, each instance is identified by a unique, stable ordinal number.
See more: mongodb-sts, mongodb-on-kubernetes.
Also use a Headless Service to manage the domain of a Pod. With a Headless Service there is no need for a LoadBalancer or for kube-proxy to reach Pods through a Service IP, so the cluster IP is set to None.
In your case:
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
  - port: 27017
The error:
Also uncaught exception: Error: couldn't add user: Error preflighting normalization: U_STRINGPREP_PROHIBITED_ERROR _getErrorWithCode@src/mongo/shell/utils.js:25:13
indicates that the secret may be missing. Take a look: mongodb-initializating.
In your case secret should look similar:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: YWRtaW4=
  mongo-root-password: MWYyZDFlMmU2N2Rm
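The data values are plain base64, so you can generate your own with a shell one-liner (echo -n avoids encoding a trailing newline):
echo -n 'admin' | base64          # YWRtaW4=
echo -n '1f2d1e2e67df' | base64   # MWYyZDFlMmU2N2Rm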
Remember also to configure a volume for your pods; follow the tutorials I have linked above.
Deploy MongoDB as a StatefulSet, not as a Deployment.
Example:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: replicaset
                  operator: In
                  values:
                  - MainRepSet
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
      - name: secrets-volume
        secret:
          secretName: shared-bootstrap-data
          defaultMode: 256
      containers:
      - name: mongod-container
        #image: pkdone/mongo-ent:3.4
        image: mongo
        command:
        - "numactl"
        - "--interleave=all"
        - "mongod"
        - "--wiredTigerCacheSizeGB"
        - "0.1"
        - "--bind_ip"
        - "0.0.0.0"
        - "--replSet"
        - "MainRepSet"
        - "--auth"
        - "--clusterAuthMode"
        - "keyFile"
        - "--keyFile"
        - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
        - "--setParameter"
        - "authenticationMechanisms=SCRAM-SHA-1"
        resources:
          requests:
            cpu: 0.2
            memory: 200Mi
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: secrets-volume
          readOnly: true
          mountPath: /etc/secrets-volume
        - name: mongodb-persistent-storage-claim
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongodb-persistent-storage-claim
      annotations:
        volume.beta.kubernetes.io/storage-class: "standard"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

AKS Azure DevOps Build Agent

I'm trying to build an Azure DevOps Linux Build Agent in Azure Kubernetes Service.
I created the YAML file and created the secrets to use inside the file.
I applied the file, and my pod is in a "waiting" state with "CreateContainerConfigError".
I ran the command
"kubectl get pod <pod name> -o yaml"
and it states that the secret "vsts" could not be found.
I find this weird because when I use "kubectl get secrets" I see the secrets "vsts-account" and "vsts-token" listed.
Check your Kubernetes configuration; it is supposed to look like this:
apiVersion: v1
kind: ReplicationController
metadata:
  name: vsts-agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vsts-agent
        version: "0.1"
    spec:
      containers:
      - name: vsts-agent
        image: microsoft/vsts-agent:ubuntu-16.04-docker-18.06.1-ce-standard
        env:
        - name: VSTS_ACCOUNT
          valueFrom:
            secretKeyRef:
              name: vsts
              key: VSTS_ACCOUNT
        - name: VSTS_TOKEN
          valueFrom:
            secretKeyRef:
              name: vsts
              key: VSTS_TOKEN
        - name: VSTS_POOL
          value: dockerized-vsts-agents
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock
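Note that this configuration expects a single secret named vsts holding both keys, not the two separate secrets vsts-account and vsts-token you created. A sketch of creating it (substitute your own organization and PAT):
kubectl create secret generic vsts \
  --from-literal=VSTS_ACCOUNT=<your-account> \
  --from-literal=VSTS_TOKEN=<your-pat>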
You may follow the blog below to see whether it helps you:
https://mohitgoyal.co/2019/01/10/run-azure-devops-private-agents-in-kubernetes-clusters/

Typescript "error: no pg_hba.conf entry for host "x", user "x", database "x", SSL off"

Setup
I'm currently using Kubernetes to manage my NodeJS services.
Every NodeJS service has its own PostgreSQL database, I'm using TypeORM to access each database. Everything worked fine until I converted my Kubernetes Deployment to a StatefulSet. I did this because I wanted my databases to keep their data, even after being shut down.
Problem
The NodeJS service (confirmation-deployment-{unique-id}), which exposes a REST API, can't connect to the PostgreSQL database (confirmation-postgres-statefulset-{number}).
Error
The logs return this error: error: no pg_hba.conf entry for host "x", user "x", database "x", SSL off
I found that I had to set {ssl: true} in my ConnectionOptions, but when I did this I got another error: Error: The server does not support SSL connections.
I'm basically stuck at the moment. The first error tells me to convert {ssl: false} to {ssl: true}, while the other error tells me to do the opposite. I've no idea why this happens; all of this started when I converted the Deployment to a StatefulSet inside Kubernetes.
If I can take a guess, it's perhaps an internal cluster network issue inside Kubernetes? Anyway, I'm not familiar with those errors...
Any help would be appreciated!
System Information
Windows 10
Docker Desktop
Linux Containers
Code
ormconfig.json: TypeORM's way to define the ConnectionOptions.
{
  "type": "postgres",
  "host": "confirmation-postgres-service",
  "port": 5432,
  "username": "postgres",
  "password": "12345",
  "database": "postgres",
  "synchronize": true,
  "ssl": false,
  "entities": [
    "src/models/*.model.ts"
  ]
}
confirmation-postgres-statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: confirmation-postgres-statefulset
spec:
  serviceName: "confirmation-postgres"
  replicas: 2
  selector:
    matchLabels:
      app: confirmation-postgres
  template:
    metadata:
      labels:
        app: confirmation-postgres
    spec:
      containers:
      - name: confirmation-postgres
        image: postgres
        envFrom:
        - configMapRef:
            name: confirmation-postgres-config
        ports:
        - containerPort: 5432
          name: confirmation-db
        volumeMounts:
        - name: confirmation-postgres-storage
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: confirmation-postgres-storage
        persistentVolumeClaim:
          claimName: confirmation-postgres-pvc
confirmation-postgres-service.yml
apiVersion: v1
kind: Service
metadata:
  name: confirmation-postgres-service
spec:
  type: ClusterIP
  selector:
    app: confirmation-postgres
  ports:
  - name: db
    protocol: TCP
    port: 5432
    targetPort: 5432
confirmation-postgres-config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: confirmation-postgres-config
  labels:
    app: confirmation-postgres
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: "12345"
confirmation-postgres-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: confirmation-postgres-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
confirmation-postgres-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: confirmation-postgres-pvc
  labels:
    app: confirmation-postgres
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
If you want to use SSL, you need to configure/compile your PostgreSQL server to support it.
If you don't want to use SSL, you need to configure your pg_hba.conf to allow connections (from that host, user, and database) without SSL. Most likely your pg_hba.conf doesn't allow this connection either way (with or without SSL), but the client only reports on the one it actually attempted. I say this because if your pg_hba.conf demands SSL but your server doesn't support it, the server should refuse to start in the first place; and if it didn't start, you would be getting a different error.
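For illustration, a pg_hba.conf entry that accepts password-authenticated connections without requiring SSL might look like this (a sketch; tighten the address range and auth method to match your cluster network):
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   10.0.0.0/8   md5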

How to mount a volume with a Windows container in Kubernetes?

I'm trying to mount a persistent volume into my Windows container, but I always get this error:
Unable to mount volumes for pod "mssql-with-pv-deployment-3263067711-xw3mx_default(....)": timeout expired waiting for volumes to attach/mount for pod "default"/"mssql-with-pv-deployment-3263067711-xw3mx". list of unattached/unmounted volumes=[blobdisk01]
I've created a GitHub gist with the console output of "get events" and "describe sc | pvc | po"; maybe someone will find the solution with it.
Below are the scripts that I'm using for deployment.
my storageclass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk-sc
provisioner: kubernetes.io/azure-disk
parameters:
  skuname: Standard_LRS
my PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  storageClassName: azure-disk-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
and the deployment of my container:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-with-pv-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql-with-pv
    spec:
      nodeSelector:
        beta.kubernetes.io/os: windows
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql-with-pv
        image: testacr.azurecr.io/sql/mssql-server-windows-developer
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - mountPath: "c:/volume"
          name: blobdisk01
      volumes:
      - name: blobdisk01
        persistentVolumeClaim:
          claimName: azure-disk-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-with-pv-deployment
spec:
  selector:
    app: mssql-with-pv
  ports:
  - protocol: TCP
    port: 1433
    targetPort: 1433
  type: LoadBalancer
What am I doing wrong? Is there another way to mount a volume?
Thanks for any help :)
I would try:
Change the API version to v1: https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-disk
Run kubectl get events to see if you have a more detailed error (I could figure out the reason by watching events when I used NFS).
Maybe it's this bug, which I read about in this post?
You will need a new volume on the D: drive; it looks like folders on C: are not supported for Windows containers, see here:
https://github.com/kubernetes/kubernetes/issues/65060
Demos:
https://github.com/andyzhangx/demo/tree/master/windows/azuredisk
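If the C:-drive limitation from that issue applies here, one thing to try is mounting the volume as a separate drive letter instead of a folder under C:. A sketch of just the changed mount in the deployment above, not a verified fix:
volumeMounts:
- mountPath: "d:"
  name: blobdisk01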

Pull image Azure Container Registry - Kubernetes

Does anyone have any advice on how to pull from Azure Container Registry while running within Azure Container Service (Kubernetes)?
I've tried a sample deployment like the following, but the image pull is failing:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      name: jenkins-master
      labels:
        name: jenkins-master
    spec:
      containers:
      - name: jenkins-master
        image: myregistry.azurecr.io/infrastructure/jenkins-master:1.0.0
        imagePullPolicy: Always
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 5
        ports:
        - name: jenkins-web
          containerPort: 8080
        - name: jenkins-agent
          containerPort: 50000
I got this working after reading this info.
http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod
So, first create the registry access key:
kubectl create secret docker-registry myregistrykey --docker-server=https://myregistry.azurecr.io --docker-username=ACR_USERNAME --docker-password=ACR_PASSWORD --docker-email=ANY_EMAIL_ADDRESS
Replace the server address with the address of your ACR and the USERNAME, PASSWORD, and EMAIL values with those of the admin user for your ACR. Note: the email address can be any value.
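If you don't have the admin credentials handy, they can be fetched with the Azure CLI (assuming the admin user is enabled on the registry):
az acr credential show --name myregistry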
Then in the deployment you simply tell Kubernetes to use that key for pulling the image, like so:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      name: jenkins-master
      labels:
        name: jenkins-master
    spec:
      containers:
      - name: jenkins-master
        image: myregistry.azurecr.io/infrastructure/jenkins-master:1.0.0
        imagePullPolicy: Always
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 5
        ports:
        - name: jenkins-web
          containerPort: 8080
        - name: jenkins-agent
          containerPort: 50000
      imagePullSecrets:
      - name: myregistrykey
This is something we've actually made easier. When you provision a Kubernetes cluster through the Azure CLI, a service principal is created with contributor privileges. This enables pulling from any Azure Container Registry in the subscription.
There was a PR: https://github.com/kubernetes/kubernetes/pull/40142 that was merged into new deployments of Kubernetes. It won't work on existing Kubernetes instances.
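For an existing cluster, one workaround is to grant the cluster's service principal pull rights on the registry yourself. A sketch, assuming you know the service principal's app ID and the registry's resource ID:
az role assignment create \
  --assignee <service-principal-app-id> \
  --scope <acr-resource-id> \
  --role AcrPull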
