Angular 5 app with server-side rendering using Angular Universal on App Engine - node.js

I am a newbie to Node.js and Angular, and I have a simple Angular 5 app with some simple routes.
I also want to support server-side rendering in my app with Angular Universal and host the app on Google Cloud App Engine.
I tried to deploy an Angular Universal starter kit (https://github.com/gdi2290/angular-starter) on App Engine, but it fails. I deployed it using Docker; although the deployment succeeds, it gives a 502 Bad Gateway error from nginx. I have tried clearing the cache and every other suggestion available on the net, but the result is still the same.
I have also tried the example from Google: https://codelabs.developers.google.com/codelabs/cloud-cardboard-viewer/index.html?index=..%2F..%2Findex. It worked, but it is basic.
Please help me create an App Engine deployable version of https://github.com/gdi2290/angular-starter.

Before I go into any detail, let me give you the GitHub link of my Angular Universal seed project with a Dockerfile and Sass. I use it as a starting point for my projects. Since I am a fan of Vagrant, you will find the Vagrantfile in the repository and can use it to create the exact same environment for development as well as for testing the Docker container. The Readme file provides all the details on how to work with the project.
Here is the link.
Angular Universal Project Creation
The Angular Universal setup steps are detailed here in the official documentation.
However, I wasted a few hours before finding out the following point:
Webpack 3 is not compatible with ts-loader versions higher than 3.5.0. At the time of writing, the latest version of Angular CLI is 1.7.2, which uses Webpack 3.*. Hence, while setting up Angular Universal, install ts-loader@3.5.0.
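In practice that pin amounts to something like the following (a sketch; adjust to your package manager):
npm install --save-dev ts-loader@3.5.0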
Dockerfile
My Dockerfile looks like the one below. As you can see, I am using Docker's multi-stage build feature to first build the project in a container, copy the distribution to a new container, and discard the container used for the build. This allows the Google Cloud build trigger to build the source code and create the Docker image from the distribution.
FROM node:8.10.0 AS ausbuilder
RUN useradd --create-home --shell /bin/bash aus; \
chown -R aus /home/aus
USER aus
WORKDIR /home/aus
COPY aus/ /home/aus
RUN mkdir /home/aus/.npm; \
npm config set prefix /home/aus/.npm; \
npm install --quiet --no-progress -g webpack@3.11.0; \
npm install --quiet --no-progress -g @angular/cli@1.7.2; \
npm install --quiet --no-progress;
ENV PATH=/home/aus/.npm/bin:$PATH
RUN npm run build; \
webpack --config webpack.server.config.js --no-progress
FROM keymetrics/pm2:8-alpine
RUN adduser -h /home/aus -s /bin/bash aus; \
chown -R aus /home/aus
USER aus
WORKDIR /home/aus
COPY --from=ausbuilder /home/aus/dist /home/aus/dist
EXPOSE 4000/tcp
ENTRYPOINT ["pm2-runtime","start","/home/aus/dist/server.js"]
Deployment to Kubernetes in Google Cloud
We first need to create a build trigger in Google Cloud so that, as soon as we push code to (let's say) the master branch, the build and the subsequent deployment are triggered. Your code may be hosted in Google Cloud Source Repositories, Bitbucket, or GitHub; you need to integrate your source control with the build trigger. While creating the build trigger, you will have the option to select either a Dockerfile or a cloudbuild.yaml. If you choose the first option, your code will be built and the resulting Docker image will be stored in the Google Container Registry. I go for the second option, as it allows me to do more, such as deploying to Kubernetes.
Here is what my cloudbuild.yaml looks like.
A few important points to note here:
While cloning the repository, I cannot give an external URL. What happens is that when you create a build trigger, Google creates another repository in the Google domain which basically mirrors the external source control (Bitbucket in my case). You can find this in the Google Cloud Source Repositories section.
Secondly, I am creating a latest tag for the container image so that I can refer to it in the Kubernetes deployment descriptor, which I have named kubedeployment.yaml. kubedeployment.yaml is referenced in cloudbuild.yaml, as you can see below:
steps:
- name: gcr.io/cloud-builders/git
  args: ['clone', 'https://source.developers.google.com/p/aus2018/r/bitbucket-saptarshibasu-aus']
- name: 'gcr.io/cloud-builders/docker'
  args: ["build", "-t", "gcr.io/aus2018/aus:$REVISION_ID", "."]
- name: 'gcr.io/cloud-builders/docker'
  args: ["tag", "gcr.io/aus2018/aus:$REVISION_ID", "gcr.io/aus2018/aus:latest"]
- name: 'gcr.io/cloud-builders/docker'
  args: ["push", "gcr.io/aus2018/aus:$REVISION_ID"]
- name: 'gcr.io/cloud-builders/docker'
  args: ["push", "gcr.io/aus2018/aus:latest"]
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - 'create'
  - '-f'
  - 'kubedeployment.yaml'
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=asia-south1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=aus'
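Incidentally, the same build config can be exercised without waiting for the trigger by submitting a build manually (a sketch, assuming the gcloud SDK is configured for your project):
gcloud builds submit --config cloudbuild.yaml .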
Finally, here is how the kubedeployment.yaml looks:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aus-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aus
  template:
    metadata:
      labels:
        app: aus
    spec:
      containers:
      - name: aus
        image: gcr.io/aus2018/aus:latest
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: aus-svc
  labels:
    app: aus
spec:
  type: NodePort
  selector:
    app: aus
  ports:
  - protocol: TCP
    port: 80
    targetPort: 4000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aus-ing
spec:
  backend:
    serviceName: aus-svc
    servicePort: 80
Once the deployment completes after a few minutes, you'll get the Ingress URL, and a few minutes later your app starts showing up at that URL.
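To read the Ingress address without the console, something along these lines should work (resource name as in the manifest above):
kubectl get ingress aus-ing
# or just the external IP:
kubectl get ingress aus-ing -o jsonpath='{.status.loadBalancer.ingress[0].ip}'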
You are definitely going to customise this to fit your needs, but I hope it gives you a starting point.

Related

Prisma Query engine library for current platform "debian-openssl-1.1.x" could not be found

I have a NodeJS/NestJS project consisting of multiple microservices. I've deployed my Postgres database and also a microservice pod which interacts with the database on an AWS Kubernetes cluster. I'm using Prisma as the ORM, and when I exec into the pod and run
npx prisma generate
the output is as below:
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
✔ Generated Prisma Client (4.6.1 | library) to ./node_modules/@prisma/client in 1.32s
You can now start using Prisma Client in your code. Reference: https://pris.ly/d/client
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
but when I call an API to create an object in my Postgres db through the Prisma ORM, I get the error below in the microservice pod:
error: PrismaClientInitializationError:
Invalid `prisma.session.create()` invocation:
Query engine library for current platform "debian-openssl-1.1.x" could not be found.
You incorrectly pinned it to debian-openssl-1.1.x
This probably happens, because you built Prisma Client on a different platform.
(Prisma Client looked in "/usr/src/app/node_modules/@prisma/client/runtime/libquery_engine-debian-openssl-1.1.x.so.node")
Searched Locations:
/usr/src/app/node_modules/.prisma/client
C:\Users\MOHSEN\Desktop\cc-g\cc-gateway\cc-gateway\db-manager\node_modules\@prisma\client
/usr/src/app/node_modules/@prisma/client
/usr/src/app/node_modules/.prisma/client
/usr/src/app/node_modules/.prisma/client
/tmp/prisma-engines
/usr/src/app/node_modules/.prisma/client
To solve this problem, add the platform "debian-openssl-1.1.x" to the "binaryTargets" attribute in the "generator" block in the "schema.prisma" file:
generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native"]
}
Then run "prisma generate" for your changes to take effect.
Read more about deploying Prisma Client: https://pris.ly/d/client-generator
at RequestHandler.handleRequestError (/usr/src/app/node_modules/@prisma/client/runtime/index.js:34316:13)
at /usr/src/app/node_modules/@prisma/client/runtime/index.js:34737:25
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async PrismaService._executeRequest (/usr/src/app/node_modules/@prisma/client/runtime/index.js:35301:22)
at async PrismaService._request (/usr/src/app/node_modules/@prisma/client/runtime/index.js:35273:16)
at async AppService.createSession (/usr/src/app/dist/app.service.js:28:28) {
clientVersion: '4.6.1',
errorCode: undefined
}
Also, this is the generator client block in the schema.prisma file:
generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "linux-musl", "debian-openssl-1.1.x"]
}
Before that I had the same problem, but the error mentioned "linux-musl", like below:
Query engine library for current platform "linux-musl" could not be found.
although I was using linux-musl in the binaryTargets of the generator block.
After lots of research I found that I should not use the Alpine Node image in my Dockerfile; instead I used buster, and my Dockerfile is as below:
FROM node:buster As development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build db-manager
FROM node:buster as production
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
I think the problem is that the Prisma query engine cannot be found because it is searching the wrong locations for the platform-specific query engine. So I tried to provide the locations where the query engine files are located in my pod as env variables in the deployment file, as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-manager
spec:
  replicas: 3
  selector:
    matchLabels:
      app: db-manager
  template:
    metadata:
      labels:
        app: db-manager
    spec:
      containers:
      - name: db-manager
        image: image-name
        ports:
        - containerPort: 3002
        env:
        - name: PORT
          value: "3002"
        - name: DATABASE_URL
          value: db url
        - name: KAFKA_HOST
          value: kafka url
        - name: PRISMA_MIGRATION_ENGINE_BINARY
          value: /usr/src/app/node_modules/@prisma/engines/migration-engine-debian-openssl-1.1.x
        - name: PRISMA_INTROSPECTION_ENGINE_BINARY
          value: /usr/src/app/node_modules/@prisma/engines/introspection-engine-debian-openssl-1.1.x
        - name: PRISMA_QUERY_ENGINE_BINARY
          value: /usr/src/app/node_modules/@prisma/engines/libquery_engine-debian-openssl-1.1.x.so.node
        - name: PRISMA_FMT_BINARY
          value: /usr/src/app/node_modules/@prisma/engines/prisma-fmt-debian-openssl-1.1.x
but it doesn't work, and the error still happens when Prisma tries to execute a create query. I would really appreciate it if anyone could help me. Am I doing something wrong, or is this a bug in Prisma when used in an AWS deployment?
Thanks for any comments or guidance.
Try updating to Prisma version 4.8.0, and set the binaryTargets property in the schema.prisma file to:
(...)
binaryTargets = [
  "native",
  "debian-openssl-1.1.x",
  "debian-openssl-3.0.x",
  "linux-musl",
  "linux-musl-openssl-3.0.x"
]
(...)
Don't forget to run yarn prisma generate
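As a rough sketch of the upgrade and regeneration steps (assuming npm is used and the image is rebuilt afterwards; image-name is the placeholder used in the deployment above):
npm install --save-dev prisma@4.8.0
npm install @prisma/client@4.8.0
npx prisma generate            # or: yarn prisma generate
docker build -t image-name .   # rebuild the image so the pod picks up the new engines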

Azure DevOps Build Agents in Kubernetes

We are planning to run our Azure DevOps build agents in Kubernetes pods, but going through the internet we couldn't find any recommended approach to follow.
Details:
Azure DevOps Server
AKS 1.19.11
Looking for:
An AKS Kubernetes cluster where ADO can trigger its pipelines with the required dependencies.
The scaling of pods should happen based on the load initiated from ADO.
Is there any default, MS-provided image currently available for the build agents?
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Any suggestions are highly appreciated.
This article provides instructions for running your Azure Pipelines agent in Docker. You can set up a self-hosted agent in Azure Pipelines to run inside a Windows Server Core container (for Windows hosts) or an Ubuntu container (for Linux hosts) with Docker.
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Add tools and customize the container
Once you have created a basic build agent, you can extend the Dockerfile to include additional tools and their dependencies, or build your own container by using this one as a base layer. Just make sure that the following are left untouched:
The start.sh script is called by the Dockerfile.
The start.sh script is the last command in the Dockerfile.
Ensure that derivative containers don't remove any of the dependencies stated by the Dockerfile.
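Once you have extended the Dockerfile, building and pushing the customized agent image might look roughly like this (a sketch; <acr-name> is a hypothetical registry name, and <acr-server>/dockeragent:latest matches the image reference in the deployment below):
az acr login --name <acr-name>
docker build -t <acr-server>/dockeragent:latest .
docker push <acr-server>/dockeragent:latest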
Note: Docker was replaced with containerd in Kubernetes 1.19, and Docker-in-Docker became unavailable. A few use cases for running Docker inside a Docker container:
One potential use case for docker in docker is for the CI pipeline, where you need to build and push docker images to a container registry after a successful code build.
Building Docker images with a VM is pretty straightforward. However, when you plan to use Jenkins Docker-based dynamic agents for your CI/CD pipelines, docker in docker comes as a must-have functionality.
Sandboxed environments.
For experimental purposes on your local development workstation.
If your use case requires running Docker inside a container, then you must use Kubernetes version <= 1.18.x (currently not supported on Azure) as shown here, or run the agent in an alternative Docker environment as shown here.
Otherwise, if you are deploying the self-hosted agent on AKS, the azdevops-deployment Deployment at step 4, here, must be changed to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: azdevops-agent
        image: <acr-server>/dockeragent:latest
        env:
        - name: AZP_URL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_URL
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_TOKEN
        - name: AZP_POOL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_POOL
The scaling of pods should happen based on the load initiated from ADO.
You can use cluster-autoscaler and horizontal pod autoscaler. When combined, the horizontal pod autoscaler is focused on running the number of pods required to meet application demand. The cluster autoscaler is focused on running the number of nodes required to support the scheduled pods. [Reference]
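As a sketch of how both could be enabled (the deployment name is taken from the manifest above; the AKS flags assume the Azure CLI and placeholder resource names):
# Horizontal Pod Autoscaler for the agent deployment
kubectl autoscale deployment azdevops-deployment --min=1 --max=5 --cpu-percent=75
# Cluster autoscaler on the AKS cluster
az aks update --resource-group <resource-group> --name <cluster-name> \
  --enable-cluster-autoscaler --min-count 1 --max-count 5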

kubectl apply -f k8s: is unable to recognize service and deployment and has no matches for kind "Service" in version "v1"

I have Kubernetes running on OVH without a problem, but I recently reinstalled the build server because of other issues and set everything up again. Now, when trying to apply files, it gives this horrible error. Did I miss something? And what does this error really mean?
+ kubectl apply -f k8s
unable to recognize "k8s/driver-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
unable to recognize "k8s/driver-mysql-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-mysql-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
unable to recognize "k8s/driver-mysql-persistent-volume-claim.yaml": no matches for kind "PersistentVolumeClaim" in version "v1"
unable to recognize "k8s/driver-phpmyadmin-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-phpmyadmin-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
I tried all previous answers on SO, but none worked for me. I don't think I really need them (correct me if I am wrong on that). I would really like to get some help with this.
I have installed kubectl and I have a config file that I use.
And when I execute the kubectl get pods command, I get the pods that were deployed before.
These are some of the yaml files:
k8s/driver-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: driver-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: driver-service
  ports:
  - port: 3000
    targetPort: 8080
k8s/driver-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: driver-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: driver-service
  template:
    metadata:
      labels:
        component: driver-service
    spec:
      containers:
      - name: driver
        image: repo.taxi.com/driver-service
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: taxiregistry
Dockerfile
FROM maven:3.6.0-jdk-8-slim AS build
COPY . /home/app/
RUN rm /home/app/controllers/src/main/resources/application.properties
RUN mv /home/app/controllers/src/main/resources/application-kubernetes.properties /home/app/controllers/src/main/resources/application.properties
RUN mvn -f /home/app/pom.xml clean package
FROM openjdk:8-jre-slim
COPY --from=build /home/app/controllers/target/controllers-1.0.jar /usr/local/lib/driver-1.0.0.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/driver-1.0.0.jar"]
(output of the kubectl get pods and kubectl api-versions commands omitted)
Solution found
I had to place the binary file in a .kube folder, which should be placed in the root directory.
In my case I had to manually create the .kube folder in the root directory first.
After that I set my env variable to point to that folder and placed my config file with my settings in there as well.
Then I had to share the folder with the jenkins user and apply rights to the jenkins group.
Jenkins was not up to date, so I had to restart the Jenkins server.
And it worked like a charm!
Keep in mind to restart the Jenkins server so that the changes you make take effect in Jenkins.
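Roughly, the steps looked like this (a sketch only; the exact paths and the jenkins group name are assumptions about my environment):
mkdir -p ~/.kube                          # the .kube folder mentioned above
cp /path/to/your/config ~/.kube/config    # place the kubectl config file there
export KUBECONFIG=~/.kube/config          # make the env variable point to it (also for the Jenkins user)
chgrp -R jenkins ~/.kube && chmod -R g+rX ~/.kube   # share the folder with the jenkins group
systemctl restart jenkins                 # restart Jenkins so the changes take effect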

The container starts in Google Cloud Shell but fails on Kubernetes Engine

I'm a novice at Kubernetes, Docker, and GCP, so sorry if the question is stupid and/or obvious.
I'm trying to create a simple gRPC server with HTTP(S) mapping, using the Google samples as an example. The issue is that my container can be started from Google Cloud Shell with no complaints, but fails on Kubernetes Engine after deployment.
In Google Cloud Console:
git clone https://gitlab.com/myrepos/grpc.git
cd grpc
docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1 .
# Run the container "locally"
docker run --rm -p 8000:8000 gcr.io/project-id/python-grpc-diagnostic-server:v1
Server is started
^CServer is stopped
# Pushing the image to Container Registry
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1
# Deployment
kubectl create -f grpc-diagnostic.yaml
In the Deployment details, the 'diagnostic' container has "CrashLoopBackOff" status, and in Logs the following error appears:
File "/diagnostic/diagnostic_pb2.py", line 17, in <module>
from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2
ModuleNotFoundError: No module named 'google.api'
Could you please give me any idea why the same container starts in the shell but fails on Kubernetes Engine?
Thanks.
requirements.txt
grpcio
grpcio-tools
pytz
google-auth
googleapis-common-protos
Dockerfile
FROM gcr.io/google_appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
RUN virtualenv -p python3.6 /env
# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD . /diagnostic/
WORKDIR /diagnostic
RUN pip install -r requirements.txt
EXPOSE 8000
ENTRYPOINT ["python", "/diagnostic/diagnostic_server.py"]
grpc-diagnostic.yaml
apiVersion: v1
kind: Service
metadata:
  name: esp-grpc-diagnostic
spec:
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 80
    targetPort: 9000 # or 8000?
    protocol: TCP
    name: http2
  selector:
    app: esp-grpc-diagnostic
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: esp-grpc-diagnostic
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: esp-grpc-diagnostic
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--service=diagnostic.endpoints.project-id.cloud.goog",
          "--rollout_strategy=managed",
          "--backend=grpc://127.0.0.1:8000"
        ]
        ports:
        - containerPort: 9000
      - name: diagnostic
        image: gcr.io/project-id/python-grpc-diagnostic-server:v1
        ports:
        - containerPort: 8000
That was my stupid mistake. I changed the image, but the name of the image was the same, so the cluster continued using the old, wrong image, thinking nothing had changed.
The right way to redeploy code is to create an image with a new tag, for instance v1.01, and set the new image for the existing deployment as described in the documentation. I deleted the service and the deployment and recreated them, but I didn't delete the cluster, thinking I had started from scratch.
Right way:
docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1.01 .
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1.01
kubectl set image deployment/esp-grpc-diagnostic diagnostic=gcr.io/project-id/python-grpc-diagnostic-server:v1.01
Another way to pull updated images without changing the name is to change imagePullPolicy, which is set to IfNotPresent by default. (more info)
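For the imagePullPolicy route, a patch along these lines should do it (deployment and container names as in the manifest above):
kubectl patch deployment esp-grpc-diagnostic \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"diagnostic","imagePullPolicy":"Always"}]}}}}'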

How to specify OpenShift image when creating a Job

Under OpenShift 3.3, I'm attempting to create a Job using the oc command line tool (which apparently lacks argument-based support for Job creation), but I'm having trouble understanding how to make use of an existing app's image stream. For example, when my app does an S2I build, it pushes to the app:latest image stream. I want the Job I'm attempting to create to be run in the context of a new job-specific pod using my app's image stream. I've prepared a test Job using this YAML:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-test-job
spec:
  template:
    spec:
      restartPolicy: Never
      activeDeadlineSeconds: 30
      containers:
      - name: myapp
        image: myapp:latest
        command: ["echo", "hello world"]
When I create the above Job using oc create -f job.yaml, OpenShift fails to pull myapp:latest. If I change image: myapp:latest to image: 172.30.194.141:5000/myapp/myapp:latest (and in doing so, specify the host and port of my OpenShift instance's internal Docker registry), this works, but I'd rather not specify this as it seems like introducing a dependency on an OpenShift implementation detail. Is there a way to make OpenShift Jobs use images from an existing app without depending on such details?
The documentation shows image: perl, but it's unclear how to use a Docker image built and stored within OpenShift.
I learned that you simply cannot yet use an ImageStream with a Job unless you specify the full address to the internal OpenShift Docker registry. Relevant GitHub issues:
https://github.com/openshift/origin/issues/13042
https://github.com/openshift/origin/issues/13161
https://github.com/openshift/origin/issues/12672
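If you do hard-code the internal registry reference, you can at least look it up from the image stream instead of guessing the address (a sketch, assuming the oc client and the myapp image stream from the question):
oc get imagestream myapp -o jsonpath='{.status.dockerImageRepository}'
# prints something like 172.30.194.141:5000/myapp/myapp; append :latest in the Job spec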
