I have a NodeJS API that sits behind an Nginx reverse proxy and connects to a Redis instance. To deploy this onto an OpenShift cluster, I need the following:
Nginx image e.g. registry.access.redhat.com/rhscl/nginx-114-rhel7
Redis image e.g. registry.redhat.io/rhel8/redis-5
NodeJS code hosted on GitHub
I am not sure if OpenShift Operators and Helm charts are the right choice - they sound like overkill (or do they?). Then there are YAML-based installations, e.g. Strimzi on OpenShift.
Given an OpenShift cluster with oc installed, perhaps there is yet another way, where all of the following commands are wrapped in a shell script.
git clone https://github.com/me/nodejsapi
oc new-project awesome
# Trigger S2I for NodeJS
cd nodejsapi
oc new-app .
# New applications with nginx image
oc import ...
# New applications with redis image
oc import ...
# New config map set-up through the OpenShift APIs
curl ...
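For context, a concrete version of such a script might look roughly like this. This is only a sketch: the names (awesome, api, redis, nginx) are placeholders, and depending on the cluster version oc new-app may create a DeploymentConfig rather than a Deployment.

#!/bin/sh
set -e

oc new-project awesome

# S2I build of the NodeJS code straight from GitHub,
# using the built-in nodejs builder imagestream
oc new-app nodejs~https://github.com/me/nodejsapi --name=api

# Deploy the Redis and Nginx images
oc new-app registry.redhat.io/rhel8/redis-5 --name=redis
oc new-app registry.access.redhat.com/rhscl/nginx-114-rhel7 --name=nginx

# Config map via oc instead of raw curl calls against the API
oc create configmap api-config --from-literal=REDIS_HOST=redis
oc set env deployment/api --from=configmap/api-config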
Can you please advise on a suitable approach to install the NodeJS application and the other components?
If you ask me, creating an Operator for this is overkill. A Helm chart is closer to the correct abstraction, but the easiest solution on OpenShift is to use a Template.
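For illustration, a minimal Template might look something like this (a sketch only; the names are placeholders, and the NodeJS, Redis, and Nginx objects would each be added under objects in the same way):

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: awesome-stack
parameters:
- name: APP_NAME
  value: nodejsapi
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}
  spec:
    selector:
      app: ${APP_NAME}
    ports:
    - port: 8080
      targetPort: 8080

It can then be instantiated with oc process -f template.yaml | oc apply -f -, or registered in a project so it appears in the web console catalog.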
In my opinion, if your app needs many pods with different technologies, the right choice is a Helm chart. But if you don't want to use one, an alternative is a deployment file in YAML or JSON.
Here is an example for SQL Server 2019:
https://github.com/chauuy/sqlserver.git
Note: a template file is also available to add an ephemeral SQL Server (without persistent storage) as a new component, like MySQL etc.
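To make the plain-YAML route concrete, a minimal Deployment for the Redis part of the stack might look like this (a sketch; the name and labels are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: registry.redhat.io/rhel8/redis-5
        ports:
        - containerPort: 6379

Apply it with oc apply -f redis-deployment.yaml; the NodeJS and Nginx parts would get their own Deployment/Service files in the same style.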
Related
I've been using K8S for a year or so and continue to revisit a problem.
My app runs in K8S and I now need to debug it. I'm asking about a NodeJS app, but similar questions could be asked about Java Spring Boot apps (this question is just for NodeJS).
I want to use my favorite IDE (IntelliJ or VSCode) to run the app, but the app currently gets its configuration (inside K8S) using ConfigMaps and Secrets.
(Q) Is there a "best practice" or "pattern" that can be followed that supports the DRY principle and keeps configuration in one place, usable both in K8S and when running locally?
Background
I have a NodeJS app for which I decided to use environment variables to hold configuration information, because that approach worked well in the IntelliJ IDE, in Docker, and in K8S.
I used npm dotenv and created .env.local, .env.stage, and .env.prod files to support running in different environments. This worked well enough until the app was running in K8S and someone wanted to tweak the configuration, and didn't believe that rebuilding the image was the best way to support that. Instead, the K8S experts told me I should use ConfigMaps and Secrets, so I converted from the dotenv approach to K8S ConfigMaps and Secrets.
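For what it's worth, that conversion can be fairly mechanical, since kubectl can build a ConfigMap straight from a dotenv-style file; a sketch (the ConfigMap name is hypothetical):

# create the ConfigMap from the existing dotenv file
kubectl create configmap app-config --from-env-file=.env.prod

and then, in the container spec, inject every key as an environment variable:

envFrom:
- configMapRef:
    name: app-config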
I kept the old .env files around just in case, and I can use them, but the source code no longer uses the dotenv package.
require('dotenv').config() // loads key=value pairs from .env into process.env
process.env.myConfigVariable // reads a value, whether set by dotenv or by K8S
So I need to either add that code back to support debugging, or manually set the environment variables. I'm wondering if there is a better approach.
I have YAML file templates to make it easy to recreate the deployment from scratch if/when needed.
.env.local
deploy/
  helm/
    create-configmap.yaml
    create-secret.yaml
src/
  common/*
  appMain.js
Some of the approaches I've considered:
(a) Accept it and have two configs (one for local and one for K8S). Leave the dotenv code in, but don't deploy a .env file when deploying to K8S.
(b) Run a local k8s (like minikube or k3s) and use my ConfigMaps and Secrets as I would in K8S. I then need to figure out how to connect from my IDE to the local k3s environment and open ports in it to support this. Some solutions include Bridge to Kubernetes, the YouTube video "Remote Debugging in Kubernetes with Cloud Code", "Debug Java Microservices in Kubernetes with IntelliJ", and I'm sure several others.
(c) Use a JSON config file instead of dotenv. For example, use a JSON config file for everything and map it to /app/config.json; that same config file can be used in both environments. I could have config-local.json, config-stage.json, and config-prod.json to support the different environments (see the sketch after this list).
(d) You tell me. What's another way?
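To make option (c) concrete, here is a minimal sketch of what I have in mind (file, key, and variable names are hypothetical):

# put the JSON file into a ConfigMap
kubectl create configmap app-config --from-file=config.json=config-prod.json

In the pod spec, mount just that one file:

volumes:
- name: config
  configMap:
    name: app-config
containers:
- name: app
  volumeMounts:
  - name: config
    mountPath: /app/config.json
    subPath: config.json

And the app code stays identical in both environments:

// appMain.js: CONFIG_PATH is set to /app/config.json in K8S,
// and left unset locally so the local file is used
const config = require(process.env.CONFIG_PATH || './config-local.json')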
Thanks!
I have a small NodeJS app I want to deploy to IBM Cloud as an "action". What I've been doing until now is just zipping the project files and creating/updating actions using the IBM Cloud CLI like this:
ibmcloud fn action create project-name C:\Users\myuser\Desktop\node-js-projects\some-project\test-folder.zip --kind nodejs:12
This was working great; however, I'm now testing a new project which has a much larger node_modules folder, and as such IBM Cloud won't accept it. I've turned my attention to using Docker, as the article below explains.
https://medium.com/weekly-webtips/adding-extra-npm-modules-to-ibm-cloud-functions-with-docker-fabacd5d52f1
Everything makes sense; however, I have no idea what to do with the credentials that the app uses. Since IBM Cloud seems to require you to run "docker push", I'm assuming it's not safe to include a .env file in the Docker image?
I know in IBM Cloud I can pass "parameters" to an action, but I'm not sure if that helps here. Can those params be accessed from a piece of code deployed this way?
Would really appreciate some help on this one. Hoping there's a straightforward standard way of doing it that I've just missed. I'm brand new to docker so still learning.
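For reference, IBM Cloud Functions is built on Apache OpenWhisk, and parameters bound to an action are merged into the params object handed to the action's main function at invocation time, regardless of how the action was packaged, so a Docker-based action should see them too. A sketch (the parameter name and value are placeholders):

ibmcloud fn action update project-name --param DB_PASSWORD supersecret

and in the action code:

function main(params) {
  // bound parameters arrive here at invocation time,
  // so nothing secret has to be baked into the image
  const dbPassword = params.DB_PASSWORD;
  return { ok: Boolean(dbPassword) };
}

Since the credentials never live in the image, the .env file can stay out of it entirely (adding it to .dockerignore keeps it out of the build context to be safe).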
I need help from someone familiar with AWS and web servers. Currently I'm walking through this tutorial, trying to get started with NodeJS and AWS: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs.html
I'm trying to figure out how to do the equivalent of a traditional "git clone" for an AWS project (e.g. if I wanted to work on my existing AWS project on a different machine).
I read some EB CLI documentation (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-cmd-commands.html) and tried "eb clone env-name". However, this actually created a separate environment on AWS within my application, which isn't what I wanted. It also only added a .gitignore and a .elasticbeanstalk folder to my directory - none of the source code for my AWS application.
I'm confused about what the standard process is for working with AWS projects. In particular, how can I start working on my existing AWS project from another machine? (Is there any way to pull my source code from an AWS project?) Is there any way I can view my code on AWS?
Side note: in the past I worked with Google Apps Script in the cloud, which used the clasp CLI for pushing and pulling code. This was very intuitive: it was literally clasp pull to pull code from the cloud and clasp push to push code to it.
Elastic Beanstalk isn't a code repo. It's a way to host applications in a simplified way, without having to configure the compute resources. Compare this to something like EC2 where all the networking and web server configuration is manual.
You can still use git to manage your source code, and there's git CLI integration with Elastic Beanstalk too. Once you've got your source code working, you bundle it up into a .zip file and upload it to EB. You can also use AWS CodePipeline (with CodeBuild) to watch git repos, build source code into bundles, and automatically deploy them to Elastic Beanstalk.
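For the "work on another machine" part, the usual pattern is to treat your own git remote as the source of truth and re-attach the EB CLI to the existing application; a sketch (application and environment names are placeholders):

# on the new machine: clone from YOUR repo, not from EB
git clone https://github.com/me/myapp.git
cd myapp

# re-attach the working copy to the existing EB application/environment
eb init my-application --region us-east-1
eb use my-env

# bundle the current commit and deploy it
eb deploy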
If you are looking for a way to host source code on AWS, AWS CodeCommit is the managed git solution.
You should take a look at the Amplify Framework by AWS: https://aws-amplify.github.io/docs/ – here's a walkthrough that will get you where you are heading faster. Sure, it mentions teams, but the result can be applied to single developers too: https://aws-amplify.github.io/docs/cli/multienv?sdk=js
Since you mentioned "view my code on AWS", you should have a look here: https://aws.amazon.com/cloud9/ – this will walk you through setting up an account, repos and working with your code on the cloud.
Good luck!
I have a GitLab repository containing a Node.js app with Express. I want to "deploy" this code to my Ubuntu server so I can use the Express server remotely and not only locally, but I don't want to install Node.js; instead, I want to try Docker.
I have read a lot about Docker and understand the fundamentals. My question is this: if I install Docker on my Ubuntu server, how can I deploy my code with Docker when I push to my repository?
Basically, you have to divide the process into two steps: one is dockerizing your app, which means creating a Docker image for your repository; the second is having your server use this image, possibly automating the process on push. So I would do something like this:
Dockerize your app. This means having a Dockerfile where you create an image that contains your app, runs it, and exposes a port so it can be used externally (see the sketch below).
Run the image on your server. Your server will need to have Docker installed and be able to get the right image (more on this later). If only one image is involved, you can just use a simple docker run command. If there are more parts involved, such as a database or a webserver, I would recommend docker-compose.
Make the image available to your server. You have more than one option here: you can publish your image to a Docker registry (private or public), or you can just clone the repository on your server and build the image there.
Lastly, you need to tie these steps together. For that you need a hook that reacts to pushes to the repository and sends a command to the server to fetch/build the image and run the newer version.
You have a lot of flexibility on how to do this, actually. I would start with a simpler process, where you build the image on your server, and build on top of that according to your needs.
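As a starting point for the first step, a minimal Dockerfile for an Express app might look like this (a sketch, assuming the app listens on port 3000 and starts with npm start):

FROM node:18-alpine
WORKDIR /app

# install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .
EXPOSE 3000
CMD ["npm", "start"]

On the server, the manual version of the whole loop is then: git pull, docker build -t myapp ., stop/remove the old container, and docker run -d -p 80:3000 myapp. A CI job (e.g. in GitLab CI) can run those same commands on every push.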
Dokku is a Docker-based PaaS platform that provides git push deployments. It supports Heroku buildpacks to build and run your application, or custom Dockerfile deployments.
I have been following this guide to installing Parse Server on Amazon AWS with Elastic Beanstalk, and the set-up is working fine. However, that particular guide only addresses the installation of Parse Server, not Parse Dashboard, which I would also like to have set up.
Since Parse is a Node.js app, I was hoping I could get away with "npm install -g parse-dashboard" on the command line, but seeing as changes made on the instance might be overwritten by the load balancer, I am not sure this is the right path.
I do know that Amazon has an EB CLI that can be used to deploy applications, but I am not sure if that is the best/simplest way either.
What I would like is the easiest way to install Parse Dashboard and connect it to my AWS EB set-up, and I would also like the Parse Dashboard to be easily updated when changes are made available through GitHub.
So my question really boils down to two alternatives, as I see it:
1) Should I install the Parse Dashboard on AWS? If so, what would be the best way to do this?
2) Can I perhaps set up a local install of the Parse Dashboard and connect it to my Parse Server hosted on AWS EB? If so, what would be the recommended method of doing this?
For question 1: you don't want to put the dashboard on a public domain. Parse Dashboard gives full access to modify your database.
If you still want to do it, it can be done just like parse-server. Once you have cloned the repository, add an app.config file under /your_project_folder/.ebextensions/app.config with the following content:
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm start"
and then follow this guide.
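For alternative 2, a locally installed dashboard only needs a config file pointing at the EB-hosted server; a minimal sketch (the URL, app ID, and keys below are placeholders for your own values):

{
  "apps": [
    {
      "serverURL": "http://my-parse-env.us-east-1.elasticbeanstalk.com/parse",
      "appId": "myAppId",
      "masterKey": "myMasterKey",
      "appName": "MyApp"
    }
  ]
}

Run it with parse-dashboard --config parse-dashboard-config.json. This keeps the master key off the public internet, and updating the dashboard is just npm update -g parse-dashboard.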