Failed to pull image myapidemodocker.azurecr.io/apidemo:v4.0: rpc error: code = Unknown desc = unknown blob - azure

Any idea why I keep getting this annoying and unhelpful error code/description?
Failed to pull image myapidemodocker.azurecr.io/apidemo:v4.0: rpc error: code = Unknown desc = unknown blob
I suspected an incorrect secret and followed this documentation from Microsoft, with no success: https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks
Context:
I am using Visual Studio with Docker for Windows to create a Windows container image.
The image is pushed to Azure Container Registry (ACR) and deployed as an Azure Container Instance. Unfortunately, I can't use ACI for the production application because it is not connected to a private vNET, and I can't use a public IP for security reasons; that's just what was done for the PoC!
Next step: I created a Kubernetes cluster in Azure and am trying to deploy the same image (Windows container) into a Kubernetes pod, but it is not working.
Let me share my YAML definition and event logs.
Here is my YAML definition:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: apidemo
spec:
  template:
    metadata:
      labels:
        app: apidemo
    spec:
      containers:
      - name: apidemo
        image: myapidemodocker.azurecr.io/apidemo:v4.0
      imagePullSecrets:
      - name: myapidemosecret
      nodeSelector:
        beta.kubernetes.io/os: windows
Event logs:
Events:
  Type     Reason                 Age               From                               Message
  ----     ------                 ----              ----                               -------
  Normal   Scheduled              4m                default-scheduler                  Successfully assigned apidemo-57b5fc58fb-zxk86 to aks-agentpool-18170390-1
  Normal   SuccessfulMountVolume  4m                kubelet, aks-agentpool-18170390-1  MountVolume.SetUp succeeded for volume "default-token-gsjhl"
  Normal   SandboxChanged         2m                kubelet, aks-agentpool-18170390-1  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling                2m (x2 over 4m)   kubelet, aks-agentpool-18170390-1  pulling image "apidemodocker.azurecr.io/apidemo:v4.0"
  Warning  Failed                 20s (x2 over 2m)  kubelet, aks-agentpool-18170390-1  Failed to pull image "apidemodocker.azurecr.io/apidemo:v4.0": [rpc error: code = Unknown desc = unknown blob, rpc error: code = Unknown desc = unknown blob]
  Warning  Failed                 20s (x2 over 2m)  kubelet, aks-agentpool-18170390-1  Error: ErrImagePull
  Normal   BackOff                10s               kubelet, aks-agentpool-18170390-1  Back-off pulling image "apidemodocker.azurecr.io/apidemo:v4.0"
  Warning  Failed                 10s               kubelet, aks-agentpool-18170390-1  Error: ImagePullBackOff
I also don't understand why Kubernetes is still using /var/run/secrets/kubernetes.io/serviceaccount from default-token-gsjhl as the secret while I specified my own!
Thanks for taking the time to provide feedback.

I was able to resolve the issue, and it had nothing to do with the error message! The actual problem was that I was trying to use a Windows container image, and Kubernetes in Azure only supports Linux container images.
These are the actions I had to take:
Configured Ubuntu (Linux Containers on Windows 10).
Configured Docker to use Linux (switched to Linux containers).
Converted the ASP.NET MVC project to ASP.NET Core using Visual Studio 2017. This was a big change, needed to support multiple platforms including Linux.
Updated the Dockerfile and docker-compose project.
Created a new Docker image (Linux container).
Pushed the image to Azure Container Registry.
Created a new deployment in Kubernetes with the same credentials. It worked!
Created a new Service to expose the app in Kubernetes. This step created an endpoint that clients can use.
My Kubernetes cluster is vNET-joined and all IPs are private. So I exposed the Kubernetes endpoint (service) via Azure API Gateway. Just for the sake of the demo, I allowed anonymous access to the API (an API key and JWT token are a must for a production app).
Here is the application flow: Client App -> Azure API Gateway -> Kubernetes Endpoint(private IP) -> Kubernetes PODs -> My Linux Container
There are lots of complexities, and the technology specifications are changing rapidly, so it took me lots of reading to get it right! I am sure you can do it. Try my API running on Azure Kubernetes Service here:
https://gdtapigateway.azure-api.net/containerdemo/aks/api/address/GetTop10Cities?StateProvince=Texas&CountryRegion=United%20States
https://gdtapigateway.azure-api.net/containerdemo/aks/api/address/GetAddressById?addressID=581
Here are some of the configurations I used, for your information.
Dockerfile:
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "gdt.api.demo.dotnetcore.dll"]
Docker-compose:
version: '3'
services:
  gdt-api-demo:
    image: gdt.api.demo.dotnetcore
    build:
      context: .\gdt.api.demo.dotnetcore
      dockerfile: Dockerfile
Kubernetes Deployment Definition:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: gdtapidemo
spec:
  template:
    metadata:
      labels:
        app: gdtapidemo
    spec:
      containers:
      - name: gdtapidemo
        image: gdtapidemodocker.azurecr.io/gdtapidemo-ubuntu:v1.0
      imagePullSecrets:
      - name: gdtapidemosecret
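The gdtapidemosecret referenced above has to exist before the deployment; a minimal sketch of creating such an ACR pull secret (the service principal credentials are placeholders):
kubectl create secret docker-registry gdtapidemosecret \
  --docker-server=gdtapidemodocker.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password>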
Kubernetes Service Definition:
kind: Service
apiVersion: v1
metadata:
  name: gdtapidemo-service
spec:
  selector:
    app: gdtapidemo   # must match the pod label from the deployment above
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9200
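Applying the manifest and reading back the assigned (private) service IP looks roughly like this (the file name is assumed):
kubectl apply -f gdtapidemo-service.yaml
kubectl get service gdtapidemo-service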
(Screenshot: the service as deployed in Kubernetes.)

Related

k3d tries to pull Docker image instead of using the local one

I'm just studying the core of K8s on a local machine (Linux Mint 20.2).
I created a one-node cluster locally with:
k3d cluster create mycluster
And now I want to run a Spring Boot application in a container.
I built a local image:
library:0.1.0
And here is snippet from Deployment.yml:
spec:
  terminationGracePeriodSeconds: 40
  containers:
  - name: 'library'
    image: library:0.1.0
    imagePullPolicy: IfNotPresent
Despite the fact that the image is already built:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
library 0.1.0 254c13416f46 About an hour ago 462MB
Starting the container fails:
pod/library-867dfb64db-vndtj Pulling image "library:0.1.0"
pod/library-867dfb64db-vndtj Failed to pull image "library:0.1.0": rpc error: code = Unknown desc = failed to pull and unpack image "library:0.1.0": failed to resolve reference "library:0.1.0": failed to do request: Head "https://...com/v2/library/manifests/0.1.0": x509: certificate signed by unknown authority
pod/library-867dfb64db-vndtj Error: ErrImagePull
pod/library-867dfb64db-vndtj Error: ImagePullBackOff
pod/library-867dfb64db-vndtj Back-off pulling image "library:0.1.0"
How do I make locally built images visible to the k3d cluster?
Solution:
Update the Deployment.yml:
spec:
  terminationGracePeriodSeconds: 40
  containers:
  - name: 'library-xp'
    image: xpinjection/library:0.1.0
    imagePullPolicy: Never
And import the image to cluster:
k3d image import xpinjection/library:0.1.0 -c mycluster
If you don't want to use a docker registry, you have to import the locally built image into the k3d cluster:
k3d image import [IMAGE | ARCHIVE [IMAGE | ARCHIVE...]] [flags]
But don't forget to configure in your deployment:
imagePullPolicy: Never
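As an alternative to importing after every rebuild, k3d can also run a local registry for the cluster; a sketch (the registry name and port are arbitrary; k3d prefixes the registry name with k3d-):
k3d registry create myregistry --port 5050
k3d cluster create mycluster --registry-use k3d-myregistry:5050
docker tag library:0.1.0 k3d-myregistry:5050/library:0.1.0
docker push k3d-myregistry:5050/library:0.1.0
# then reference k3d-myregistry:5050/library:0.1.0 in the Deployment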

Skaffold cannot pull image from Harbor

Expected behavior
Skaffold should pull the image from the insecure Harbor registry running on HTTP. I have tried everything from these docs:
https://skaffold.dev/docs/environment/image-registries/#insecure-image-registries
but without success.
Actual behavior
Jib pushes the image to the insecure Harbor registry without a problem, but an error is thrown when trying to pull the image and deploy the microservice to Kubernetes:
192.168.2.24:30002/trm/redis-spring:latest@sha256:0f8d21819d845bd55aa699afa8b21e141d41f10d9d9fb1a2c6dbb2d468d89e81 can't be pulled.
The same image can be pulled using docker:
docker pull 192.168.2.24:30002/trm/redis-spring:latest@sha256:0f8d21819d845bd55aa699afa8b21e141d41f10d9d9fb1a2c6dbb2d468d89e81
Information
Skaffold version: v1.35.1
Operating system: Windows 10 Home
Installed via: skaffold.dev
Contents of skaffold.yaml:
apiVersion: skaffold/v2beta25
kind: Config
metadata:
  name: redis
build:
  insecureRegistries:
  - 192.168.2.24:30002/trm
  - 192.168.2.24:30002/trm/redis-spring
  - 192.168.2.24:30002/trm/redis-spring:latest@sha256:0f8d21819d845bd55aa699afa8b21e141d41f10d9d9fb1a2c6dbb2d468d89e81
  artifacts:
  - image: redis-spring
    jib:
      args:
      - -Pjib
      - -DsendCredentialsOverHttp=true
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
    - redis-spring-boot.yaml
time="2022-02-02T11:12:40+01:00" level=debug msg="marking resource failed due to error code STATUSCHECK_IMAGE_PULL_ERR" subtask=-1 task=Deploy
- mdm-dev:deployment/redis-spring-boot: container redis-spring is waiting to start: 192.168.2.24:30002/trm/redis-spring:latest#sha256:0f8d21819d845bd55aa699afa8b21e141d41f10d9d9fb1a2c6dbb2d468d89e81 can't be pulled
- mdm-dev:pod/redis-spring-boot-68ccfdc688-tj7pp: container redis-spring is waiting to start: 192.168.2.24:30002/trm/redis-spring:latest#sha256:0f8d21819d845bd55aa699afa8b21e141d41f10d9d9fb1a2c6dbb2d468d89e81 can't be pulled
- mdm-dev:deployment/redis-spring-boot failed. Error: container redis-spring is waiting to start: 192.168.2.24:30002/trm/redis-spring:latest#sha256:0f8d21819d845bd55aa699afa8b21e141d41f10d9d9fb1a2c6dbb2d468d89e81 can't be pulled.
time="2022-02-02T11:12:40+01:00" level=debug msg="setting skaffold deploy status to STATUSCHECK_IMAGE_PULL_ERR." subtask=-1 task=Deploy```
You need to configure a registry pull secret for your cluster, and then either annotate your pod-specs or your service account to use this registry pull secret.
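A minimal sketch of that setup (the namespace comes from the logs above; the credentials are placeholders):
kubectl create secret docker-registry harbor-pull-secret \
  --docker-server=192.168.2.24:30002 \
  --docker-username=<harbor-user> \
  --docker-password=<harbor-password> \
  -n mdm-dev
# either list the secret under imagePullSecrets in the pod spec,
# or attach it to the namespace's default service account:
kubectl patch serviceaccount default -n mdm-dev \
  -p '{"imagePullSecrets": [{"name": "harbor-pull-secret"}]}'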

Azure DevOps Build Agents in Kubernetes

We are planning to run our Azure DevOps build agents in Kubernetes pods, but going through the internet we couldn't find any recommended approach to follow.
Details:
Azure DevOps Server
AKS 1.19.11
Looking for:
An AKS cluster where ADO can trigger its pipelines with the required dependencies.
The scaling of pods should happen based on the load coming from ADO.
Is there any default MS-provided image currently available for the build agents?
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Any suggestions are highly appreciated.
This article provides instructions for running your Azure Pipelines agent in Docker. You can set up a self-hosted agent in Azure Pipelines to run inside a Windows Server Core (for Windows hosts), or Ubuntu container (for Linux hosts) with Docker.
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Add tools and customize the container
Once you have created a basic build agent, you can extend the Dockerfile to include additional tools and their dependencies, or build your own container by using this one as a base layer. Just make sure that the following are left untouched:
The start.sh script is called by the Dockerfile.
The start.sh script is the last command in the Dockerfile.
Ensure that derivative containers don't remove any of the dependencies stated by the Dockerfile.
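For the Zulu JDK requirement, a sketch of such a derivative image (the dockeragent base image name follows the linked article; the Azul apt repository setup and the zulu11-jdk package name are assumptions, so check Azul's installation docs for your distro):
FROM dockeragent:latest
# install Zulu JDK 11 on the Debian/Ubuntu-based agent image
RUN apt-get update \
    && apt-get install -y --no-install-recommends gnupg curl \
    && curl -fsSL https://repos.azul.com/azul-repo.key | apt-key add - \
    && echo "deb https://repos.azul.com/zulu/deb stable main" > /etc/apt/sources.list.d/zulu.list \
    && apt-get update \
    && apt-get install -y --no-install-recommends zulu11-jdk \
    && rm -rf /var/lib/apt/lists/*
# start.sh from the base image remains the entrypoint, per the constraints above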
Note: Docker was replaced with containerd in Kubernetes 1.19, and Docker-in-Docker became unavailable. A few use cases for running Docker inside a Docker container:
One potential use case for docker in docker is for the CI pipeline, where you need to build and push docker images to a container registry after a successful code build.
Building Docker images with a VM is pretty straightforward. However, when you plan to use Jenkins Docker-based dynamic agents for your CI/CD pipelines, docker in docker comes as a must-have functionality.
Sandboxed environments.
For experimental purposes on your local development workstation.
If your use case requires running docker inside a container then, you must use Kubernetes with version <= 1.18.x (currently not supported on Azure) as shown here or run the agent in an alternative docker environment as shown here.
Otherwise, if you are deploying the self-hosted agent on AKS, the azdevops-deployment Deployment at step 4, here, must be changed to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: azdevops-agent
        image: <acr-server>/dockeragent:latest
        env:
        - name: AZP_URL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_URL
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_TOKEN
        - name: AZP_POOL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_POOL
The scaling of pods should happen based on the load coming from ADO.
You can use cluster-autoscaler and horizontal pod autoscaler. When combined, the horizontal pod autoscaler is focused on running the number of pods required to meet application demand. The cluster autoscaler is focused on running the number of nodes required to support the scheduled pods. [Reference]
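As a sketch, a horizontal pod autoscaler targeting the agent deployment above could look like this (CPU-based scaling is an assumption for illustration; build agents are often scaled on queue length instead, which requires custom metrics):
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: azdevops-agent-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: azdevops-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70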

azure kubernetes service - self signed cert on private registry

I have a tunnel created between my Azure subscription and my on-prem servers. On-prem we have an Artifactory server that houses all of our Docker images. For all internal servers we have a company-wide CA trust, and all certs are generated from it.
However, when I try to deploy something to AKS and reference this Docker registry, I get a cert error because the nodes themselves do not trust the in-house self-signed cert.
Is there any way to get the root CA chain added to the nodes? Or a way to tell the Docker daemon on the AKS nodes that this is an insecure registry?
I'm not one hundred percent sure, but you can try using your Docker config to create the secret for image pulls, with a command like this:
cat ~/.docker/config.json | base64
Then create the secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
Use this secret in your deployment or pod as the value of imagePullSecrets. For more details, see Using a private Docker Registry with Kubernetes.
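For example, the pod spec would reference it roughly like this (the image name is hypothetical):
spec:
  containers:
  - name: myapp
    image: artifactory.mycompany.com/myapp:1.0
  imagePullSecrets:
  - name: registrypullsecret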
To begin with, I would recommend using curl to check the connection between your Azure cluster and the on-prem server.
Run both curl and curl -k and check whether they work (-k allows connections to SSL sites without valid certs; I assume plain curl won't work, which would mean you don't have the on-prem certs on the Azure cluster).
If curl -k works but plain curl doesn't, you need to copy the certs from on-prem and add them to the Azure cluster.
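For example (the Artifactory hostname is hypothetical; /v2/ is the Docker registry API root):
curl -v https://artifactory.mycompany.com/v2/
curl -vk https://artifactory.mycompany.com/v2/
# if only the -k variant succeeds, the nodes are missing your internal root CA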
Links which should help you do that
https://docs.docker.com/ee/enable-client-certificate-authentication/
https://askubuntu.com/questions/73287/how-do-i-install-a-root-certificate
And here is some information about doing that with the Docker daemon:
https://docs.docker.com/registry/insecure/
I hope it will help you. Let me know if you have any more questions.
It looks like you are having the same problem described here: https://github.com/kubernetes/kubernetes/issues/43924.
This solution should probably work for you:
As far as I remember this was a Docker issue, not a Kubernetes one. Docker does not use Linux's CA certs. Nobody knows why. You have to install those certs manually (on every node that could spawn those pods) so that Docker can use them:
/etc/docker/certs.d/mydomain.com:1234/ca.crt
This is a highly annoying issue as you have to butcher your nodes after bootstrapping to get those certs in there. And Kubernetes spawns nodes all the time. How this issue has not been solved yet is a mystery to me. It's a complete showstopper IMO.
Then it's just a question of how to run this for every node. You could do that with a DaemonSet which runs a script from a ConfigMap, as described here: https://cloud.google.com/solutions/automatically-bootstrapping-gke-nodes-with-daemonsets. That article refers to a GitHub project https://github.com/GoogleCloudPlatform/solutions-gke-init-daemonsets-tutorial.
The magic is in the DaemonSet.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-initializer
  labels:
    app: default-init
spec:
  selector:
    matchLabels:
      app: default-init
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: node-initializer
        app: default-init
    spec:
      volumes:
      - name: root-mount
        hostPath:
          path: /
      - name: entrypoint
        configMap:
          name: entrypoint
          defaultMode: 0744
      initContainers:
      - image: ubuntu:18.04
        name: node-initializer
        command: ["/scripts/entrypoint.sh"]
        env:
        - name: ROOT_MOUNT_DIR
          value: /root
        securityContext:
          privileged: true
        volumeMounts:
        - name: root-mount
          mountPath: /root
        - name: entrypoint
          mountPath: /scripts
      containers:
      - image: "gcr.io/google-containers/pause:2.0"
        name: pause
You could modify the script that is in the ConfigMap to pull your cert and put it in the correct directory.
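A sketch of what that script change could look like (the registry host is hypothetical, ca.crt is assumed to be shipped in the same ConfigMap as entrypoint.sh, and restarting Docker on every node is disruptive, so test carefully):
#!/usr/bin/env bash
set -euo pipefail
# runs in the privileged init container; ROOT_MOUNT_DIR is the node's root filesystem (/root)
REGISTRY="artifactory.mycompany.com"
CERT_DIR="${ROOT_MOUNT_DIR}/etc/docker/certs.d/${REGISTRY}"
mkdir -p "${CERT_DIR}"
# copy the internal root CA into the directory Docker checks for this registry
cp /scripts/ca.crt "${CERT_DIR}/ca.crt"
# restart the Docker daemon on the node so it picks up the new CA
chroot "${ROOT_MOUNT_DIR}" systemctl restart docker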

Getting 'didn't match node selector' when running Docker Windows container in Azure AKS

On my local machine I created a Windows Docker/Nano Server container and was able to push it to an Azure Container Registry using this command (I had to use a Windows container because I have to use CSOM in ASP.NET Core, and that is not possible on Linux):
docker push MyContainerRegistry.azurecr.io/myimage:v1
That Docker container IS visible inside the Azure container registry, MyContainerRegistry.
I know that in order to run it I have to create a container instance; however, our management team doesn't want to go down that path and wants to use AKS instead.
We do have an AKS cluster created.
kubectl IS running in our Azure shell.
I tried to create an AKS pod using this command:
kubectl apply -f myyaml.yaml
These are the contents of the YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mypod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mypod
    spec:
      containers:
      - name: mypod
        image: MyContainerRegistry.azurecr.io/itataxsync:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: mysecret
      nodeSelector:
        beta.kubernetes.io/os: windows
The pod was successfully created; when I run kubectl get pods I see the newly created pod.
However, when I get into the details of this pod, I see the following:
"Warning FailedScheduling 3m (x2 over 3m) default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector."
Does this mean that I simply can't run a Docker Windows container in Azure using AKS?
Is there any way I can run a Docker Windows container in Azure at all?
Thank you very much for your help!
You cannot yet have Windows nodes on AKS; you can, however, use aks-engine (examples).
Bear in mind that Windows support in Kubernetes is a bit lacking, so you will run into issues, unfortunately.
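As a quick check for this kind of scheduling failure, you can list the OS label on your nodes; on a Linux-only cluster the nodeSelector beta.kubernetes.io/os: windows will never match anything:
kubectl get nodes -L beta.kubernetes.io/os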
