I'm a novice with Kubernetes, Docker and GCP, so sorry if the question is stupid and/or obvious.
I'm trying to create a simple gRPC server with HTTP(S) mapping, using the Google samples as an example. The issue is that my container starts from Google Cloud Shell with no complaints but fails on Kubernetes Engine after deployment.
In Google Cloud Console:
git clone https://gitlab.com/myrepos/grpc.git
cd grpc
docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1 .
# Run the container "locally"
docker run --rm -p 8000:8000 gcr.io/project-id/python-grpc-diagnostic-server:v1
Server is started
^CServer is stopped
# Pushing the image to Container Registry
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1
# Deployment
kubectl create -f grpc-diagnostic.yaml
In the Deployment details the 'diagnostic' container has a "CrashLoopBackOff" status, and the following error appears in the logs:
File "/diagnostic/diagnostic_pb2.py", line 17, in <module>
from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2
ModuleNotFoundError: No module named 'google.api'
Could you please give me any idea why the same container starts in the shell but fails on Kubernetes Engine?
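For reference, a quick way to check whether the dependency is actually baked into the image (a hedged sketch using the tag from the build command above):
# Prints nothing if the import works, otherwise it shows the same ModuleNotFoundError
docker run --rm --entrypoint python \
    gcr.io/project-id/python-grpc-diagnostic-server:v1 \
    -c "from google.api import annotations_pb2"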
Thanks.
requirements.txt
grpcio
grpcio-tools
pytz
google-auth
googleapis-common-protos
Dockerfile
FROM gcr.io/google_appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
RUN virtualenv -p python3.6 /env
# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD . /diagnostic/
WORKDIR /diagnostic
RUN pip install -r requirements.txt
EXPOSE 8000
ENTRYPOINT ["python", "/diagnostic/diagnostic_server.py"]
grpc-diagnostic.yaml
apiVersion: v1
kind: Service
metadata:
name: esp-grpc-diagnostic
spec:
ports:
# Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
- port: 80
targetPort: 9000 # or 8000?
protocol: TCP
name: http2
selector:
app: esp-grpc-diagnostic
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: esp-grpc-diagnostic
spec:
replicas: 1
template:
metadata:
labels:
app: esp-grpc-diagnostic
spec:
containers:
- name: esp
image: gcr.io/endpoints-release/endpoints-runtime:1
args: [
"--http2_port=9000",
"--service=diagnostic.endpoints.project-id.cloud.goog",
"--rollout_strategy=managed",
"--backend=grpc://127.0.0.1:8000"
]
ports:
- containerPort: 9000
- name: diagnostic
image: gcr.io/project-id/python-grpc-diagnostic-server:v1
ports:
- containerPort: 8000
That was my stupid mistake. I changed the image, but the image name stayed the same, so the cluster kept using the old, broken image, thinking nothing had changed.
The right way to redeploy code is to build an image with a new tag, for instance v1.01, and set the new image on the existing deployment, as described in the documentation. I had deleted the service and the deployment and recreated them, but I hadn't deleted the cluster, so I wrongly assumed I was starting from scratch.
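A quick way to confirm which image a pod is actually running (a hedged sketch; the label comes from the deployment above):
# The Image and Image ID fields show the tag and the resolved digest;
# if the digest does not change after a push with the same tag,
# the node is still using its cached copy.
kubectl describe pods -l app=esp-grpc-diagnostic | grep -i image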
Right way:
docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1.01 .
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1.01
kubectl set image deployment/esp-grpc-diagnostic diagnostic=gcr.io/project-id/python-grpc-diagnostic-server:v1.01
Another way to pull an updated image without changing its tag is to change imagePullPolicy, which defaults to IfNotPresent, to Always, as sketched below.
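A minimal sketch of the relevant container spec with that policy (only the fields that matter here, reusing the names from the deployment above):
# With imagePullPolicy: Always the kubelet re-pulls the image on every pod start
# instead of reusing the copy cached on the node.
containers:
- name: diagnostic
  image: gcr.io/project-id/python-grpc-diagnostic-server:v1
  imagePullPolicy: Always
  ports:
  - containerPort: 8000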
Related
I have multiple TestCafe scripts (script1.js, script2.js) that work fine. I have Dockerized this code in a Dockerfile, and it works fine when I run the Docker image. Next, I want to invoke this Docker image as a CronJob in Kubernetes. Given below is my manifest.yaml file.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: application-automation-framework
namespace: development
labels:
team: development
spec:
schedule: "*/1 * * * *"
jobTemplate:
metadata:
labels:
team: development
spec:
ttlSecondsAfterFinished: 120
backoffLimit: 3
template:
metadata:
labels:
team: development
spec:
containers:
- name: script1-job
image: testcafe-minikube
imagePullPolicy: Never
args: ["chromium:headless", "script1.js"]
- name: script2-job
image: testcafe-minikube
imagePullPolicy: Never
args: [ "chromium:headless", "script2.js"]
restartPolicy: OnFailure
As seen above, this manifest runs two containers. When I apply the manifest to Kubernetes, the first container (script1-job) runs well, but the second container (script2-job) gives me the following error.
ERROR The specified 1337 port is already in use by another program.
If I run this with one container, it works perfectly. I also tried changing the args of the containers to the following.
args: ["chromium:headless", "script1.js", "--ports 12345,12346"]
args: ["chromium:headless", "script2.js", "--ports 1234,1235"]
Still, I get the same error saying port 1337 is already in use. (I wonder whether the --ports argument works at all in Docker.)
This is my Dockerfile for reference.
FROM testcafe/testcafe
COPY . ./
USER root
RUN npm install
Could someone please help me with this? I want to run multiple containers as CronJobs in Kubernetes, so that I can run multiple TestCafe scripts in each job invocation.
Adding the containerPort configuration to your Kubernetes resource should do the trick.
For example:
spec:
containers:
- name: script1-job
image: testcafe-minikube
imagePullPolicy: Never
args: ["chromium:headless", "script1.js", "--ports 12345,12346"]
ports:
- containerPort: 12346
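Since both containers run in the same pod and therefore share one network namespace, each TestCafe instance also needs its own port pair. A hedged sketch combining that with the snippet above (the port numbers are arbitrary, and the --ports flag is split from its value because each args element is passed as a single argv entry):
containers:
- name: script1-job
  image: testcafe-minikube
  imagePullPolicy: Never
  args: ["chromium:headless", "script1.js", "--ports", "12345,12346"]
  ports:
  - containerPort: 12346
- name: script2-job
  image: testcafe-minikube
  imagePullPolicy: Never
  args: ["chromium:headless", "script2.js", "--ports", "12347,12348"]
  ports:
  - containerPort: 12348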
I've deployed a pod in AKS and I'm trying to connect to it via an external load balancer.
The steps I took for troubleshooting are:
Verified (using kubectl) that the pod is deployed in k8s and running properly.
Verified (using netstat) that port 80 is listening; I logged into the pod using 'kubectl exec'.
The .yaml file I used to deploy is:
apiVersion: apps/v1
kind: Deployment
metadata:
name: qubo
namespace: qubo-gpu
spec:
replicas: 1
selector:
matchLabels:
app: qubo
template:
metadata:
labels:
app: qubo
spec:
containers:
- name: qubo-ctr
image: <Blanked out>
resources:
limits:
nvidia.com/gpu: 1
command: ["/app/xqx"]
args: ["80"]
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: api
namespace: qubo-gpu
annotations:
spec:
type: LoadBalancer
ports:
- protocol: TCP
port: 80
selector:
app: qubo
Turned out to be a bug in my code in how I opened the socket. In the hope this helps someone else, this is how I went about troubleshooting:
Got IP for pod:
kubectl get pods -o wide
Created a new ubuntu pod in cluster:
kubectl run -it --rm --restart=Never --image=ubuntu:18.04 ubuntu bash
Downloaded curl to new pod:
apt-get update && apt-get install -y curl
Tried to curl to the pod IP (from step 1):
curl -v -m5 http://<Pod IP>:80
Step 4 failed for me; however, I was able to run the Docker container successfully on my machine and connect to it. The issue was that I opened the connection on localhost instead of 0.0.0.0.
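A minimal Python sketch of the difference (the real backend here is a compiled binary, so this is only an illustration of the bind address):
from http.server import HTTPServer, SimpleHTTPRequestHandler

# "127.0.0.1" only accepts connections from inside the container itself;
# "0.0.0.0" listens on all interfaces, so the Service (and the curl test above) can reach it.
server = HTTPServer(("0.0.0.0", 80), SimpleHTTPRequestHandler)
server.serve_forever()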
I have Kubernetes running on OVH without a problem. But I recently reinstalled the build server because of other issues and set everything up again, and now when I try to apply files it gives this horrible error. Did I miss something? And what does this error really mean?
+ kubectl apply -f k8s
unable to recognize "k8s/driver-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
unable to recognize "k8s/driver-mysql-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-mysql-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
unable to recognize "k8s/driver-mysql-persistent-volume-claim.yaml": no matches for kind "PersistentVolumeClaim" in version "v1"
unable to recognize "k8s/driver-phpmyadmin-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-phpmyadmin-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
I tried all the previous answers on SO but none of them worked for me. I don't think I really need that, correct me if I am wrong. I would really appreciate some help with this.
I have installed kubectl and I have a config file that I use.
When I execute the kubectl get pods command, I get the pods that were deployed before.
These are some of the YAML files:
k8s/driver-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
name: driver-cluster-ip-service
spec:
type: ClusterIP
selector:
component: driver-service
ports:
- port: 3000
targetPort: 8080
k8s/driver-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: driver-deployment
spec:
replicas: 1
selector:
matchLabels:
component: driver-service
template:
metadata:
labels:
component: driver-service
spec:
containers:
- name: driver
image: repo.taxi.com/driver-service
imagePullPolicy: Always
ports:
- containerPort: 8080
imagePullSecrets:
- name: taxiregistry
Dockerfile
FROM maven:3.6.0-jdk-8-slim AS build
COPY . /home/app/
RUN rm /home/app/controllers/src/main/resources/application.properties
RUN mv /home/app/controllers/src/main/resources/application-kubernetes.properties /home/app/controllers/src/main/resources/application.properties
RUN mvn -f /home/app/pom.xml clean package
FROM openjdk:8-jre-slim
COPY --from=build /home/app/controllers/target/controllers-1.0.jar /usr/local/lib/driver-1.0.0.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/driver-1.0.0.jar"]
I also checked the output of kubectl get pods and kubectl api-versions.
Solution found
I had to place the binary file in a .kube folder, which should be placed in the root directory.
In my case I had to manually create the .kube folder in the root directory first.
After that I set my environment variable to that folder and placed my config file with my settings in there as well.
Then I had to share the folder with the jenkins user and apply the rights to the jenkins group.
Jenkins was not up to date, so I had to restart the Jenkins server.
And it worked like a charm!
Keep in mind to restart the Jenkins server so that the changes you make take effect in Jenkins.
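A quick sanity check for this kind of setup might look like the following (the path is an assumption for a typical root/Jenkins install; adjust it to your environment):
export KUBECONFIG=/root/.kube/config    # the config file kubectl (and Jenkins) should use
kubectl version                         # both the client and the server should respond
kubectl api-versions | grep -E '^(apps/v1|v1)$'   # the API versions the manifests rely on must be listed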
I created a Docker image on CentOS with systemd services enabled and built the image. I created a docker-compose.yml file, ran docker-compose up -d, the image gets built, and I can hit my application at localhost:8080/my/app.
I was using this tutorial - https://carlos.mendible.com/2017/12/01/deploy-your-first-service-to-azure-container-services-aks/.
So after I was done with my Docker image, I pushed it to Azure Container Registry and then created an Azure Container Service (AKS) cluster. Then I deployed that same working Docker image to the AKS cluster, and I get 404 page not found when trying to access the load balancer's public IP. I got into the Kubernetes machine and tried to curl localhost:8080/my/app, still 404.
I see my services are up and running without any issue inside the Kubernetes pod, and the configuration is pretty much the same as in my Docker container.
Here is my Dockerfile:
#Dockerfile based on latest CentOS 7 image
FROM c7-systemd-httpd-local
RUN yum install -y epel-release # for nginx
RUN yum install -y initscripts # for old "service"
ENV container docker
RUN yum install -y bind bind-utils
RUN systemctl enable named.service
# webserver service
RUN yum install -y nginx
RUN systemctl enable nginx.service
# Without this, init won't start the enabled services and exec'ing and starting
# them reports "Failed to get D-Bus connection: Operation not permitted".
VOLUME /run /tmp
# Don't know if it's possible to run services without starting this
ENTRYPOINT [ "/usr/sbin/init" ]
VOLUME ["/sys/fs/cgroup"]
RUN mkdir -p /myappfolder
COPY . myappfolder
WORKDIR ./myappfolder
RUN sh ./setup.sh
WORKDIR /
EXPOSE 8080
CMD ["/bin/startServices.sh"]
Here is my docker-compose.yml:
version: '3'
services:
myapp:
build: ./myappfolder
container_name: myapp
environment:
- container=docker
ports:
- "8080:8080"
privileged: true
cap_add:
- SYS_ADMIN
security_opt:
- seccomp:unconfined
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
command: "bash -c /usr/sbin/init"
Here is my Kubernetes YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: myapp
spec:
replicas: 1
template:
metadata:
labels:
app: myapp
spec:
containers:
- args:
- bash
- -c
- /usr/sbin/init
env:
- name: container
value: docker
name: myapp
image: myapp.azurecr.io/newinstalled_app:v1
ports:
- containerPort: 8080
args: ["--allow-privileged=true"]
securityContext:
capabilities:
add: ["SYS_ADMIN"]
privileged: true
#command: ["bash", "-c", "/usr/sbin/init"]
imagePullSecrets:
- name: myapp-test
---
apiVersion: v1
kind: Service
metadata:
name: myapp
spec:
type: LoadBalancer
ports:
- port: 8080
selector:
app: myapp
I used these commands -
1. az group create --name resource group --location eastus
2. az aks create --resource-group rename --name kubname --node-count 1 --generate-ssh-keys
3. az aks get-credentials --resource-group rename --name kubname
4. kubectl get cs
5. kubectl cluster-info
6. kubectl create -f yamlfile.yml
7. kubectl get po --watch
8. kubectl get svc --watch
9. kubectl get pods
10. kubectl exec -it myapp-66678f7645-2r58w -- bash
Entered the pod: it's 404.
11. kubectl get svc -> External IP - 104.43.XX.XXX:8080/my/app -> goes to 404.
But with docker-compose up -d I can reach my application.
Am I missing anything?
Figured it out. I needed the load balancer to listen on port 80 and point to destination port 8080.
That's the only change I made, and things started working fine.
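For reference, the relevant part of the Service then looks roughly like this (a sketch based on the manifest above):
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 80          # what the load balancer exposes externally
    targetPort: 8080  # what the container actually listens on
  selector:
    app: myapp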
Thanks!
I am a newbie to Node.js and Angular, and I have a simple Angular app built with Angular 5 and some simple routes.
I also want to support server-side rendering in my app with Angular Universal and host the app on Google Cloud App Engine.
I tried to upload a starter kit of Angular Universal (https://github.com/gdi2290/angular-starter) to App Engine and it fails. I deployed it using Docker. Although the deploy is successful, it gives a 502 Bad Gateway error from nginx. I have tried clearing the cache and all the other suggestions available on the net, but still the same result.
I have also tried the example from Google: https://codelabs.developers.google.com/codelabs/cloud-cardboard-viewer/index.html?index=..%2F..%2Findex. It worked, but it is basic.
Please help me create an App Engine deployable version of https://github.com/gdi2290/angular-starter.
Before I go into any detail, let me give you the GitHub link to my Angular Universal seed project with a Dockerfile and Sass. I use it as a starting point for my projects. Since I am a fan of Vagrant, you will find the Vagrantfile in the repository; use it to create the exact same environment for development as well as for testing the Docker container. The Readme file provides all the details on how to work with the project.
Here is the link.
Angular Universal Project Creation
The Angular Universal setup steps are detailed here in the official documentation.
However, I wasted a few hours finding out the following point:
Webpack 3 is not compatible with ts-loader versions higher than 3.5.0. At the time of writing, the latest version of Angular CLI is 1.7.2, which uses Webpack 3.*. Hence, while setting up Angular Universal, install ts-loader@3.5.0.
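In practice that pinning might look like this (assuming npm and the versions quoted above):
npm install --save-dev ts-loader@3.5.0   # keep the loader compatible with Webpack 3
npm install -g @angular/cli@1.7.2        # the CLI version that still uses Webpack 3.*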
Dockerfile
My Dockerfile looks like the one below. As you can see, I am using the Docker multi-stage build feature to first build the project in one container, copy the distribution into a new container, and discard the container used for the build. This allows a Google Cloud Build trigger to build the source code and create the Docker image from the distribution.
FROM node:8.10.0 AS ausbuilder
RUN useradd --create-home --shell /bin/bash aus; \
chown -R aus /home/aus
USER aus
WORKDIR /home/aus
COPY aus/ /home/aus
RUN mkdir /home/aus/.npm; \
npm config set prefix /home/aus/.npm; \
npm install --quiet --no-progress -g webpack@3.11.0; \
npm install --quiet --no-progress -g @angular/cli@1.7.2; \
npm install --quiet --no-progress;
ENV PATH=/home/aus/.npm/bin:$PATH
RUN npm run build; \
webpack --config webpack.server.config.js --no-progress
FROM keymetrics/pm2:8-alpine
RUN adduser -h /home/aus -s /bin/bash aus; \
chown -R aus /home/aus
USER aus
WORKDIR /home/aus
COPY --from=ausbuilder /home/aus/dist /home/aus/dist
EXPOSE 4000/tcp
ENTRYPOINT ["pm2-runtime","start","/home/aus/dist/server.js"]
Deployment in Kubernetes in Google Cloud
We first need to create a build trigger in Google Cloud, so that as soon as we push code to (let's say) the master branch, the build and the subsequent deployment are triggered. Your code may be hosted in Google Cloud Source Repositories, Bitbucket, or GitHub; you need to integrate your source control with the build trigger. While creating the build trigger, you will have the option to select either a Dockerfile or a cloudbuild.yaml. If you choose the first option, your code is built and the resulting Docker image is stored in Google Container Registry. I went for the second option, as it allows me to do more, like deploying to Kubernetes.
Here is what my cloudbuild.yaml looks like.
A few important points to note here:
While cloning the repository I cannot give an external URL. What happens is that when you create a build trigger, Google creates another repository in the Google domain which basically points to the external source control (Bitbucket in my case). You can find it in the Google Cloud Source Repositories section.
Secondly, I am creating a latest tag for the container image so that I can refer to it in the Kubernetes deployment descriptor, which I named kubedeployment.yaml. kubedeployment.yaml is referenced in the cloudbuild.yaml, as you can see below:
steps:
- name: gcr.io/cloud-builders/git
args: ['clone', 'https://source.developers.google.com/p/aus2018/r/bitbucket-saptarshibasu-aus']
- name: 'gcr.io/cloud-builders/docker'
args: ["build", "-t", "gcr.io/aus2018/aus:$REVISION_ID", "."]
- name: 'gcr.io/cloud-builders/docker'
args: ["tag", "gcr.io/aus2018/aus:$REVISION_ID", "gcr.io/aus2018/aus:latest"]
- name: 'gcr.io/cloud-builders/docker'
args: ["push", "gcr.io/aus2018/aus:$REVISION_ID"]
- name: 'gcr.io/cloud-builders/docker'
args: ["push", "gcr.io/aus2018/aus:latest"]
- name: 'gcr.io/cloud-builders/kubectl'
args:
- 'create'
- '-f'
- 'kubedeployment.yaml'
env:
- 'CLOUDSDK_COMPUTE_ZONE=asia-south1-a'
- 'CLOUDSDK_CONTAINER_CLUSTER=aus'
Finally, here is how the kubedeployment.yaml looks:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: aus-deploy
spec:
replicas: 1
selector:
matchLabels:
app: aus
template:
metadata:
labels:
app: aus
spec:
containers:
- name: aus
image: gcr.io/aus2018/aus:latest
ports:
- containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
name: aus-svc
labels:
app: aus
spec:
type: NodePort
selector:
app: aus
ports:
- protocol: TCP
port: 80
targetPort: 4000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: aus-ing
spec:
backend:
serviceName: aus-svc
servicePort: 80
Once the deployment is complete, you'll get the Ingress URL after a few minutes, and a few minutes later your app starts showing up at that URL.
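For example, you could check the Ingress address with something like this (the resource name is taken from the manifest above):
kubectl get ingress aus-ing   # the ADDRESS column shows the external IP once it is provisioned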
You will definitely want to customise this to fit your needs. However, I hope it gives you a starting point.