I'm working on a project using Helm and Azure Kubernetes Service (AKS), in which I'm trying to use a simple Node image that I have pushed to Azure Container Registry inside my Helm chart, but it returns an ImagePullBackOff error.
Here are some details:
My Dockerfile:
FROM node:8
# Create app directory
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 32000
CMD [ "npm", "start" ]
My helm_chart/values.yaml:
replicaCount: 1

image:
  registry: helmcr.azurecr.io
  repository: helloworldtest
  tag: 0.7
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  name: http
  type: LoadBalancer
  port: 32000
  internalPort: 32000

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths: []
  hosts:
    - name: mychart.local
      path: /
  tls: []

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}
When I try to pull the image directly using the command below:
docker pull helmcr.azurecr.io/helloworldtest:0.7
then it pulls the image successfully.
What can be wrong here?
Thanks in advance!
Your Kubernetes cluster needs to be authenticated to the container registry to pull images; generally this is done with a docker-registry secret:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
If you are using AKS, you can instead grant the cluster's service principal (application ID) pull rights on the registry; that is enough.
Reading: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
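If it helps, here is a sketch of those two options for AKS + ACR. The cluster and resource group names below are placeholders, and the imagePullSecrets snippet assumes your chart's deployment template actually renders that value:

# Option 1: attach the registry to the cluster so nodes can pull without a secret
az aks update --name myAKSCluster --resource-group myResourceGroup --attach-acr helmcr

# Option 2: create the docker-registry secret shown above and reference it from the pod spec,
# e.g. through the chart's values (only works if the deployment template uses imagePullSecrets):
#   imagePullSecrets:
#     - name: regcred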
My application consists of a UI built in React, an API, an MQTT broker, and a webhook that monitors changes in a database, all built with Node.
The database has not yet been made into a volume, but is running on my local computer.
Here is my deployment.yml file
# Defining Service
apiVersion: v1
kind: Service
metadata:
  name: factoryforge
spec:
  selector:
    app: factoryforge
  ports:
    - port: 80
      name: api
      targetPort: 3000
    - port: 81
      name: mqtt
      targetPort: 3001
    - port: 82
      name: dbmonitor
      targetPort: 3002
    - port: 83
      name: ui
      targetPort: 3003
  type: LoadBalancer
---
# Defining multi-container pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: factoryforge
spec:
  replicas: 1
  selector:
    matchLabels:
      app: factoryforge
  template:
    metadata:
      labels:
        app: factoryforge
    spec:
      containers:
        - name: api
          image: blueridgetest/api
          ports:
            - containerPort: 3000
        - name: mqtt
          image: blueridgetest/mqtt
          ports:
            - containerPort: 3001
        - name: dbmonitor
          image: blueridgetest/dbmonitor
          ports:
            - containerPort: 3002
        - name: ui
          image: blueridgetest/ui
          ports:
            - containerPort: 3003
...and the Dockerfiles for the four services
UI
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i --f
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3000
CMD [ "npm", "start" ]
API
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3002
EXPOSE 8000
CMD [ "node", "API.js" ]
MQTT
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 8884
CMD [ "node", "MQTTBroker.js" ]
DBMonitor
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3001
CMD [ "node", "index.js" ]
Any help would be greatly appreciated. Thanks!
I would say that in a standard Kubernetes workload you should start four different Pods with four different Deployments. Then create four different Services, and you will see that they can communicate with one another.
With this, your pods will be simpler to check, and you are ready to scale them individually.
Maybe I missed something?
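As a rough sketch of what one of those per-service pairs could look like, reusing the blueridgetest/api image and port 3000 from your deployment.yml (the names, external port and service type here are assumptions; the same pattern would be repeated for mqtt, dbmonitor and ui):

# Hypothetical split: one Deployment + Service per component (shown for the API only)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: blueridgetest/api
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api            # other pods can reach it in-cluster at http://api:80
spec:
  selector:
    app: api
  type: LoadBalancer    # or ClusterIP if it only needs to be reachable inside the cluster
  ports:
    - port: 80
      targetPort: 3000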
I created an AKS cluster with HTTP routing enabled. I also have my project with Dev Spaces enabled to use the cluster. While running azds up, the app creates all the necessary deployment files (helm.yaml, charts.yaml, values.yaml). However, I want to access my app through a public endpoint with a Dev Spaces URL, but when I run azds list-uris it only gives a localhost URL and not the Dev Spaces URL.
Can anyone please help?
My azds.yaml looks like this:
kind: helm-release
apiVersion: 1.1
build:
  context: .
  dockerfile: Dockerfile
install:
  chart: charts/webfrontend
  values:
    - values.dev.yaml?
    - secrets.dev.yaml?
  set:
    # Optionally, specify an array of imagePullSecrets. These secrets must be manually created in the namespace.
    # This will override the imagePullSecrets array in values.yaml file.
    # If the dockerfile specifies any private registry, the imagePullSecret for that registry must be added here.
    # ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
    #
    # For example, the following uses credentials from secret "myRegistryKeySecretName".
    #
    # imagePullSecrets:
    #   - name: myRegistryKeySecretName
    replicaCount: 1
    image:
      repository: webfrontend
      tag: $(tag)
      pullPolicy: Never
    ingress:
      annotations:
        kubernetes.io/ingress.class: traefik-azds
      hosts:
        # This expands to form the service's public URL: [space.s.][rootSpace.]webfrontend.<random suffix>.<region>.azds.io
        # Customize the public URL by changing the 'webfrontend' text between the $(rootSpacePrefix) and $(hostSuffix) tokens
        # For more information see https://aka.ms/devspaces/routing
        - $(spacePrefix)$(rootSpacePrefix)webfrontend$(hostSuffix)
configurations:
  develop:
    build:
      dockerfile: Dockerfile.develop
      useGitIgnore: true
      args:
        BUILD_CONFIGURATION: ${BUILD_CONFIGURATION:-Debug}
    container:
      sync:
        - "**/Pages/**"
        - "**/Views/**"
        - "**/wwwroot/**"
        - "!**/*.{sln,csproj}"
      command: [dotnet, run, --no-restore, --no-build, --no-launch-profile, -c, "${BUILD_CONFIGURATION:-Debug}"]
      iterate:
        processesToKill: [dotnet, vsdbg, webfrontend]
        buildCommands:
          - [dotnet, build, --no-restore, -c, "${BUILD_CONFIGURATION:-Debug}"]
I followed the guide below:
https://microsoft.github.io/AzureTipsAndTricks/blog/tip228.html
azds up only gives an endpoint on my localhost:
Service 'webfrontend' port 80 (http) is available via port forwarding at http://localhost:50597
Does your azds.yaml file have an ingress definition for the public 'webfrontend' domain?
Here is an example azds.yaml file created with the .NET Core sample application:
kind: helm-release
apiVersion: 1.1
build:
  context: .
  dockerfile: Dockerfile
install:
  chart: charts/webfrontend
  values:
    - values.dev.yaml?
    - secrets.dev.yaml?
  set:
    replicaCount: 1
    image:
      repository: webfrontend
      tag: $(tag)
      pullPolicy: Never
    ingress:
      annotations:
        kubernetes.io/ingress.class: traefik-azds
      hosts:
        # This expands to [space.s.][rootSpace.]webfrontend.<random suffix>.<region>.azds.io
        # Customize the public URL by changing the 'webfrontend' text between the $(rootSpacePrefix) and $(hostSuffix) tokens
        # For more information see https://aka.ms/devspaces/routing
        - $(spacePrefix)$(rootSpacePrefix)webfrontend$(hostSuffix)
configurations:
  develop:
    build:
      dockerfile: Dockerfile.develop
      useGitIgnore: true
      args:
        BUILD_CONFIGURATION: ${BUILD_CONFIGURATION:-Debug}
    container:
      sync:
        - "**/Pages/**"
        - "**/Views/**"
        - "**/wwwroot/**"
        - "!**/*.{sln,csproj}"
      command: [dotnet, run, --no-restore, --no-build, --no-launch-profile, -c, "${BUILD_CONFIGURATION:-Debug}"]
      iterate:
        processesToKill: [dotnet, vsdbg]
        buildCommands:
          - [dotnet, build, --no-restore, -c, "${BUILD_CONFIGURATION:-Debug}"]
More about it: https://learn.microsoft.com/pl-pl/azure/dev-spaces/how-dev-spaces-works-prep
How many service lines do you see in the 'azds up' log? Are you seeing something similar to:
Service 'webfrontend' port 'http' is available at http://webfrontend.XXX
Did you follow this guide?
https://learn.microsoft.com/pl-pl/azure/dev-spaces/troubleshooting#dns-name-resolution-fails-for-a-public-url-associated-with-a-dev-spaces-service
Do you have the latest version of azds?
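If it helps, these are the checks I usually run from the CLI (assuming the default 'webfrontend' chart from the quickstart and that your azds build exposes the usual --version flag):

azds --version     # confirm you are on a recent CLI release
azds up            # watch the output for a line with a *.azds.io URL, not just localhost
azds list-uris     # a correctly configured ingress should list a public http://webfrontend.<suffix>.azds.io entry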
I wrote a Node.js service and built it with Docker, then pushed it to Azure Container Registry.
I used Helm to pull the repository from ACR and deploy it to AKS, but the service does not run.
Please give me some advice.
Here are my Helm values. I think I have to set the type and port of the service.
replicaCount: 1

image:
  repository: tungthtestcontainer.azurecr.io/demonode
  tag: latest
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

service:
  name: http
  type: NodePort
  port: 8082
  internalPort: 8082

ingress:
  enabled: false
  annotations: {}
  hosts:
    - host: chart-example.local
      paths: []
  tls: []

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}
To figure out what happens in situations like this, it doesn't matter whether it's Helm or a YAML applied directly with kubectl apply, or whether it's Azure or another provider; I recommend you follow these steps:
Check the status of the Helm release; you can see it at any time with helm status <release-name>. Check whether the pods are correctly created and the services are also OK.
Check the deployment with kubectl describe deployment <deployment-name>
Check the pod with kubectl describe pod <pod-name>
Check the pod logs with kubectl logs -f <pod-name>
With that, you should be able to find the source of the problem.
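Put together, the sequence could look like this (the release, deployment and pod names are placeholders, not from the question):

helm status myrelease                      # release state: which pods and services were created
kubectl get pods                           # note the pod name and its STATUS (CrashLoopBackOff, ImagePullBackOff, ...)
kubectl describe deployment mydeployment   # image, ports, probes and recent events
kubectl describe pod mypod-abc123          # the Events section usually names the exact failure
kubectl logs -f mypod-abc123               # application output, e.g. the Node.js service failing to bind port 8082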
I've created a Docker image on CentOS with systemd services enabled and built my image. I created a docker-compose.yml file, ran docker-compose up -d, the image gets built, and I can hit my application at localhost:8080/my/app.
I was using this tutorial - https://carlos.mendible.com/2017/12/01/deploy-your-first-service-to-azure-container-services-aks/.
So after I was done with my Docker image, I pushed it to Azure Container Registry and then created an Azure Container Service (AKS) cluster. Then I deployed that same working Docker image to the AKS cluster, and I get 404 page not found when I try to access the load balancer's public IP. I got into the Kubernetes pod and tried to curl localhost:8080/my/app; still 404.
I see my services are up and running without any issue inside the Kubernetes pod, and the configuration is pretty much the same as my Docker container.
Here is my Dockerfile:
#Dockerfile based on latest CentOS 7 image
FROM c7-systemd-httpd-local
RUN yum install -y epel-release # for nginx
RUN yum install -y initscripts # for old "service"
ENV container docker
RUN yum install -y bind bind-utils
RUN systemctl enable named.service
# webserver service
RUN yum install -y nginx
RUN systemctl enable nginx.service
# Without this, init won't start the enabled services and exec'ing and starting
# them reports "Failed to get D-Bus connection: Operation not permitted".
VOLUME /run /tmp
# Don't know if it's possible to run services without starting this
ENTRYPOINT [ "/usr/sbin/init" ]
VOLUME ["/sys/fs/cgroup"]
RUN mkdir -p /myappfolder
COPY . myappfolder
WORKDIR ./myappfolder
RUN sh ./setup.sh
WORKDIR /
EXPOSE 8080
CMD ["/bin/startServices.sh"]
Here is my docker-compose.yml:
version: '3'
services:
  myapp:
    build: ./myappfolder
    container_name: myapp
    environment:
      - container=docker
    ports:
      - "8080:8080"
    privileged: true
    cap_add:
      - SYS_ADMIN
    security_opt:
      - seccomp:unconfined
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    command: "bash -c /usr/sbin/init"
Here is my Kubernetes YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - args:
        - bash
        - -c
        - /usr/sbin/init
        env:
        - name: container
          value: docker
        name: myapp
        image: myapp.azurecr.io/newinstalled_app:v1
        ports:
        - containerPort: 8080
          args: ["--allow-privileged=true"]
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]
          privileged: true
        #command: ["bash", "-c", "/usr/sbin/init"]
      imagePullSecrets:
      - name: myapp-test
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: myapp
I used these commands:
1. az group create --name resource group --location eastus
2. az aks create --resource-group rename --name kubname --node-count 1 --generate-ssh-keys
3. az aks get-credentials --resource-group rename --name kubname
4. kubectl get cs
5. kubectl cluster-info
6. kubectl create -f yamlfile.yml
7. kubectl get po --watch
8. kubectl get svc --watch
9. kubectl get pods
10. kubectl exec -it myapp-66678f7645-2r58w -- bash
entered into the pod - it's 404.
11. kubectl get svc -> External IP - 104.43.XX.XXX:8080/my/app -> goes to 404.
But with docker-compose up -d I can reach my application.
Am I missing anything?
Figured it out. I need to have the load balancer listening on port 80 with the target port set to 8080.
That's the only change I made, and things started working fine.
Thanks!
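For reference, a sketch of that change on the Service from the question (port is what the load balancer exposes publicly, targetPort is where the container listens):

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 80          # public port on the load balancer
    targetPort: 8080  # container port inside the pod
  selector:
    app: myapp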
I'm a novice with Kubernetes, Docker, and GCP, so sorry if the question is stupid and/or obvious.
I'm trying to create a simple gRPC server with HTTP(S) mapping, using the Google samples as an example. The issue is that my container starts from Google Cloud Shell with no complaints but fails on Kubernetes Engine after deployment.
In Google Cloud Console:
git clone https://gitlab.com/myrepos/grpc.git
cd grpc
docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1 .
# Run the container "locally"
docker run --rm -p 8000:8000 gcr.io/project-id/python-grpc-diagnostic-server:v1
Server is started
^CServer is stopped
# Pushing the image to Container Registry
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1
# Deployment
kubectl create -f grpc-diagnostic.yaml
In the Deployment details the 'diagnostic' container has "CrashLoopBackOff" status, and in the logs the following error appears:
File "/diagnostic/diagnostic_pb2.py", line 17, in <module>
from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2
ModuleNotFoundError: No module named 'google.api'
Could you please give me any idea why the same container starts in the shell but fails on Kubernetes Engine?
Thanks.
requirements.txt
grpcio
grpcio-tools
pytz
google-auth
googleapis-common-protos
Dockerfile
FROM gcr.io/google_appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
RUN virtualenv -p python3.6 /env
# Setting these environment variables are the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV -p python3.6 /env
ENV PATH /env/bin:$PATH
ADD . /diagnostic/
WORKDIR /diagnostic
RUN pip install -r requirements.txt
EXPOSE 8000
ENTRYPOINT ["python", "/diagnostic/diagnostic_server.py"]
grpc-diagnostic.yaml
apiVersion: v1
kind: Service
metadata:
  name: esp-grpc-diagnostic
spec:
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 80
    targetPort: 9000  # or 8000?
    protocol: TCP
    name: http2
  selector:
    app: esp-grpc-diagnostic
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: esp-grpc-diagnostic
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: esp-grpc-diagnostic
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--service=diagnostic.endpoints.project-id.cloud.goog",
          "--rollout_strategy=managed",
          "--backend=grpc://127.0.0.1:8000"
        ]
        ports:
        - containerPort: 9000
      - name: diagnostic
        image: gcr.io/project-id/python-grpc-diagnostic-server:v1
        ports:
        - containerPort: 8000
That was my stupid mistake. I changed the image, but the name of the image was the same, so the cluster continued using the old, wrong image, thinking nothing had changed.
The right way to redeploy code is to create an image with a new tag, for instance v1.01, and set the new image for the existing deployment, as described in the documentation. I had deleted the service and the deployment and recreated them, but I didn't delete the cluster, so I wasn't really starting from scratch.
Right way:
docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1.01 .
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1.01
kubectl set image deployment/esp-grpc-diagnostic diagnostic=gcr.io/project-id/python-grpc-diagnostic-server:v1.01
Another way to pull updated images without changing the name is to change imagePullPolicy, which is set to IfNotPresent by default (see the Kubernetes documentation on images for more info).
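A sketch of that alternative, applied to the 'diagnostic' container from the deployment above (keeping the v1 tag but forcing a pull whenever the pod starts; the pod still has to be recreated to pick up a new push):

containers:
- name: diagnostic
  image: gcr.io/project-id/python-grpc-diagnostic-server:v1
  imagePullPolicy: Always   # always pull instead of reusing the node's cached image
  ports:
  - containerPort: 8000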