How to start a Docker container with a dynamic URL - Node.js

My requirement is as follows:
A developer creates a branch in Jenkins. Let's say the branch name is "mystory-101".
The developer then pushes code to this branch.
A Jenkins job starts as soon as a commit is pushed to the branch "mystory-101" and creates a new Docker image for this branch if one has not been created already.
My application is a Node.js based app, so the Docker container starts Node.js and deploys the code from the branch "mystory-101".
After the code is deployed and Node.js is running, I would also like this Node.js app to be accessible via the URL: https://mystory-101.mycompany.com
For this purpose I was reading this https://medium.com/swlh/ci-cd-pipeline-using-jenkins-dynamic-nodes-86ea854ff7a7
but I am not sure how to achieve step #5. Can you please advise how to achieve this automatically?

Reformatting answers from the comments: given a Jenkins installation and a Kubernetes cluster, you can automate your deployments using a Jenkins plugin such as oc or kubernetes, or you may prefer to use the kubectl client directly, assuming your agents have that binary.
Not going into the RBAC specifics, you would probably need a ServiceAccount for Jenkins and use a token (which can be found in a Secret named after your ServiceAccount). That ServiceAccount should have enough privileges to create resources in the namespaces you intend to deploy into -- usually the edit ClusterRole, bound with a namespace-scoped RoleBinding:
kubectl create sa jenkins -n my-namespace
kubectl create rolebinding jenkins-edit \
  --clusterrole=edit \
  --serviceaccount=my-namespace:jenkins \
  --namespace=my-namespace
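As a rough sketch of retrieving that token for Jenkins to use (this assumes a cluster where a token Secret is created automatically for the ServiceAccount; on recent Kubernetes versions you may need to create the token explicitly):
# look up the Secret holding the ServiceAccount token and decode it
SECRET=$(kubectl -n my-namespace get sa jenkins -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n my-namespace get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)
# register a kubectl credential that Jenkins agents can use
kubectl config set-credentials jenkins --token="$TOKEN"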
Once Jenkins is done building your image, you would deploy it to Kubernetes, most likely creating a Deployment, a Service, and an Ingress, substituting resource names, namespaces, and the requested ingress FQDN to match your requirements.
Prepare your deployment yaml, something like:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-BRANCH
spec:
  selector:
    matchLabels:
      name: app-BRANCH
  template:
    metadata:
      labels:
        name: app-BRANCH
    spec:
      containers:
      - image: my-registry/path/to/image:BRANCH
        [...]
---
apiVersion: v1
kind: Service
metadata:
  name: app-BRANCH
spec:
  selector:
    name: app-BRANCH
  ports:
    [...]
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-BRANCH
spec:
  rules:
  - host: app-BRANCH.my-base-domain.com
    http:
      paths:
      - backend:
          serviceName: app-BRANCH
Then, have your Jenkins agent apply that configuration, substituting values properly:
sed "s|BRANCH|$BRANCH|" deploy.yaml | kubectl apply -n my-namespace -f-
kubectl wait -n my-namespace deploy/app-$BRANCH --for=condition=Available
kubectl logs -n my-namespace deploy/app-$BRANCH --tail=200
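For reference, a rough sketch of how those commands could be wired into a declarative Jenkinsfile stage; BRANCH_NAME is what a multibranch pipeline exposes, and deploy.yaml / my-namespace are the assumptions used above:
stage('Deploy branch environment') {
  steps {
    // substitute the branch name into the manifests and apply them
    sh '''
      sed "s|BRANCH|${BRANCH_NAME}|" deploy.yaml | kubectl apply -n my-namespace -f-
      kubectl wait -n my-namespace deploy/app-${BRANCH_NAME} --for=condition=Available
    '''
  }
}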

Related

nginx ingress controller not reading configmap

I have an nginx ingress controller on AKS which I configured using the official guide. I also wanted to configure nginx to allow underscores in the header, so I wrote the following ConfigMap:
apiVersion: v1
kind: ConfigMap
data:
  enable-underscores-in-headers: "true"
metadata:
  name: nginx-configuration
Note that I am using the default namespace for nginx. However, after applying the ConfigMap nothing seems to happen. I see no events. What am I doing wrong here?
Name:         nginx-configuration
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"enable-underscores-in-headers":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-configura...
Data
====
enable-underscores-in-headers:
----
true
Events:  <none>
The solution was to name the ConfigMap correctly. First I ran kubectl describe deploy nginx-ingress-controller, which shows the ConfigMap this deployment is looking for. In my case it was something like --configmap=default/nginx-ingress-controller. I renamed my ConfigMap to nginx-ingress-controller. As soon as I did that, the controller picked up the data from my ConfigMap and changed the configuration inside my nginx pod.
The nginx ingress controller deployment refers to a ConfigMap, which can be checked by describing the deployment:
args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
You need to edit that ConfigMap and add the parameter, rather than creating a new one:
kubectl edit cm nginx-configuration -n namespacename
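Either way, the data key stays the same. As a sketch, a ConfigMap whose name matches the controller's --configmap flag (default/nginx-ingress-controller in the accepted answer's case; your flag value may differ) would look roughly like this:
apiVersion: v1
kind: ConfigMap
metadata:
  # must match the name referenced by the controller's --configmap flag
  name: nginx-ingress-controller
  namespace: default
data:
  enable-underscores-in-headers: "true"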

What is the best way to run a scheduled job

I have a project that contains two parts: the first is a Flask API and the second is a script that should be scheduled.
The Flask app is served through a Docker image that runs in OpenShift.
My problem is where I should schedule the second script. I have access to GitLab CI/CD, but that's not really its purpose.
Building a Docker image and running it on OpenShift is also not possible, because it would run more times than needed if there is more than one pod.
The only option I'm thinking of is just using a regular server with cron.
Do you maybe have a better solution?
Thanks
There are several aspects to your question and several ways to do this; I'll give you some brief info on where to start.
Pythonic-way
You can deploy a Celery worker that will handle the scheduled jobs. You can look into the Celery documentation on how to work it out in Python: https://docs.celeryproject.org/en/latest/userguide/workers.html
You can probably get a grasp on how to extend your deployment to support celery from this article on dev.to, which shows a full deployment of celery:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: celery-worker
  labels:
    deployment: celery-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: celery-worker
  template:
    metadata:
      labels:
        pod: celery-worker
    spec:
      containers:
      - name: celery-worker
        image: backend:11
        command: ["celery", "worker", "--app=backend.celery_app:app", "--loglevel=info"]
        env:
        - name: DJANGO_SETTINGS_MODULE
          value: 'backend.settings.minikube'
        - name: SECRET_KEY
          value: "my-secret-key"
        - name: POSTGRES_NAME
          value: postgres
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: password
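Note that a Celery worker alone only executes tasks; periodic scheduling is handled by a separate beat process. A minimal sketch of an additional Deployment for it, reusing the image and module names from the example above (those names are assumptions and will differ in your project); the same env/secret configuration as the worker would also be needed:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: celery-beat
spec:
  replicas: 1   # beat must run as a single instance to avoid duplicate scheduling
  selector:
    matchLabels:
      pod: celery-beat
  template:
    metadata:
      labels:
        pod: celery-beat
    spec:
      containers:
      - name: celery-beat
        image: backend:11
        command: ["celery", "beat", "--app=backend.celery_app:app", "--loglevel=info"]
        # plus the same env entries as the worker container above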
Kubernetes-way
In Kubernetes (OpenShift is a distribution of Kubernetes) you can create a CronJob, which will execute a specific task on a schedule, similar to this:
kubectl run --generator=run-pod/v1 hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
which I pulled from Kubernetes docs.
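Newer kubectl versions have dropped the run generators, so a declarative manifest is usually preferable. A minimal sketch of the equivalent CronJob, mirroring the busybox command and schedule above:
apiVersion: batch/v1   # use batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]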
Cloud way
You can also use a serverless platform, e.g. AWS Lambda to execute a scheduled job. The cool thing about AWS Lambda is that their free tier will be more than enough for your use case.
See AWS example code here

YAML - Validation error during deployment using Yaml config file

I'm following this Microsoft Tutorial to create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure CLI. In the Run the Application section of this tutorial, I get the following error when running the following command to deploy the application using the YAML config file:
kubectl apply -f sample.yaml
error: error validating "sample.yaml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
Question: As shown in the following sample.yaml file, the apiVersion is already set. So what is this error about, and how can we fix the issue?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
      - name: sample
        image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
        resources:
          limits:
            cpu: 1
            memory: 800M
          requests:
            cpu: .1
            memory: 300M
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: sample
Issue resolved. The issue was related to copy/pasting into Azure Cloud Shell. When you paste content into the vi editor in Azure Cloud Shell and the content's first letter happens to be a, the following may happen:
if vi is opened in normal (command) mode, the leading a from the paste switches it into append/insert mode and that a never actually gets inserted into the editor. So, in my case the content was pasted as follows (I'm only showing the first few lines here for brevity). Notice that the a is missing from the first line, apiVersion: apps/v1, below:
sample.yaml file:
piVersion: apps/v1
kind: Deployment
metadata:
…..
...
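One way to avoid this, as a sketch: skip vi entirely and write the file through a shell heredoc, so the leading character cannot be swallowed by vi's normal mode (the manifest body is abbreviated here):
cat > sample.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
# ... rest of the manifest ...
EOF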
This happens when you use an outdated kubectl. Can you try updating to 1.2.5 or 1.3.0 and running it again?
I fixed it in my case! For more context, feel free to visit here.
Summary:
If you are applying all of your YAML configs at once, like this:
kubectl apply -f .
then change that to the following:
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
Basically, apply the configs separately, file by file.

Access Prodigy UI in Kubernetes Pod

I am attempting to create a service for creating training datasets using the Prodigy UI tool. I would like to do this using a Kubernetes cluster which is running in Azure cloud. My Prodigy UI should be reachable on 0.0.0.0:8880 (on the container).
As such, I created a deployment as follows:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: prodigy-dply
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prodigy_pod
  template:
    metadata:
      labels:
        app: prodigy_pod
    spec:
      containers:
      - name: prodigy-sentiment
        image: bdsdev.azurecr.io/prodigy
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash"]
        args: ["-c", "prodigy spacy textapi -F training_recipe.py"]
        ports:
        - name: prodigyport
          containerPort: 8880
This should (should being the operative word here) expose port 8880 at the pod level, aliased as prodigyport.
Following that, I have created a Service as below:
kind: Service
apiVersion: v1
metadata:
  name: prodigy-service
spec:
  type: LoadBalancer
  selector:
    app: prodigy_pod
  ports:
  - protocol: TCP
    port: 8000
    targetPort: prodigyport
At this point, when I run the associated kubectl create -f <deployment>.yaml and kubectl create -f <service>.yaml, I get an ExternalIP and associated Port: 10.*.*.*:34672.
This is not reachable by browser, and I'm assuming I have a misunderstanding of how my browser would interact with this Service, Pod, and the underlying Container. What am I missing here?
Note: I am willing to accept that Kubernetes may not be the right tool for the job here; it seems enticing because of the ease of scaling and of updating images to reflect more recent configurations.
You can find the public IP address (LoadBalancer Ingress) with this command:
kubectl get service azure-vote-front
The result looks like this:
root@k8s-master-79E9CFFD-0:~# kubectl get service azure
NAME      CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
azure     10.0.136.182   52.224.219.190   8080:31419/TCP   10m
Then you can reach it with the external IP and port, like this:
curl 52.224.219.190:8080
Also, you can find the Load Balancer rules via the Azure portal.
Hope this helps.
You can find the IP address created for your service by getting the service information through kubectl:
kubectl describe services prodigy-service
The IP address is listed next to LoadBalancer Ingress.
Also, you can use port forwarding to access your pod:
kubectl port-forward <pod_name> 8880:8880
After that you can access the Prodigy UI at localhost:8880 in your browser.

HTTPS with Azure Container Services and Kubernetes

Can anyone show how to set up HTTPS on Kubernetes in ACS?
Most tutorials suggest using Let's Encrypt, but that does not seem to fit my case as I have an existing .pfx I would like to use.
I created the ACS cluster using the following az CLI command:
az acs create --orchestrator-type kubernetes --resource-group myResourceGroup \
  --name myAppName --generate-ssh-keys
and once everything was created I used the following command to spin up my services and deployments:
kubectl create -f myApp.yaml
where myApp.yaml reads as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myApp-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: myApp
    spec:
      containers:
      - name: myApp
        image: myAppcontainerregistry.azurecr.io/myApp-images:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myAppservice
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: myApp
which gets my app working as intended over http://, but I am not too sure what my next steps are to get https:// working. Any helpful links are also appreciated.
P.S. My app is a .NET Core 2.0 app hosted in Kestrel.
Maybe we can use the Nginx Ingress Controller to achieve that in ACS.
The Ingress Controller terminates TLS in front of your services. We can follow these steps to set it up:
1. Deploy the Nginx Ingress controller
2. Create TLS certificates
3. Deploy a test HTTP service
4. Configure TLS termination
For more information about configuring the Nginx Ingress Controller for TLS termination on Kubernetes on Azure, please refer to this blog.
Here is a similar case; please refer to it.
By the way, here is an example of configuring Ingress on Kubernetes using Azure Container Service; please refer to it.
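Since you already have a .pfx, a rough sketch of steps 2 and 4 above: extract the certificate and key from the .pfx with openssl, store them in a Kubernetes TLS Secret, and reference that Secret from an Ingress. The file names, the secret name myapp-tls, and the host myapp.example.com are placeholders; the backend points at the myAppservice Service from your manifest:
# extract certificate and private key from the existing .pfx (prompts for its password)
openssl pkcs12 -in mycert.pfx -clcerts -nokeys -out tls.crt
openssl pkcs12 -in mycert.pfx -nocerts -nodes -out tls.key
# store them as a TLS Secret the ingress controller can use
kubectl create secret tls myapp-tls --cert=tls.crt --key=tls.key

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls
  rules:
  - host: myapp.example.com
    http:
      paths:
      - backend:
          serviceName: myAppservice
          servicePort: 80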
