In our team, we have an internal Node.js application built with Express. Its config files are written in dotenv format and packaged along with the application during deployment. That means whenever I change any config value, I have to redeploy and restart the entire application, even if there is no code change.
I would like to simplify this. So far, I have the following approaches in mind:
Pull the config files out of the application package and deploy only the config when it changes, then restart the application service. This would still require an outage, but no build and deployment of the application code.
Put the configs in a store like Redis or Mongo and read them from there. If a config value changes, update it in the store and call a /reload API (which we would build into our application) to reload the config object; a rough sketch of this is below. I'm not sure about the side effects of this approach.
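For illustration only, here is a minimal sketch of the second approach, assuming an Express app and a Redis store; the app:config key and the /reload route are hypothetical, not something from the original setup.

// config-reload-sketch.js - a sketch, not production code
const express = require('express');
const { createClient } = require('redis');   // npm install redis

const app = express();
let config = {};                              // in-memory config object the app reads from

const redis = createClient();                 // assumes a local Redis instance

async function loadConfig() {
  // 'app:config' is a hypothetical key holding the config as a JSON string
  const raw = await redis.get('app:config');
  config = raw ? JSON.parse(raw) : {};
}

// called after changing the value in Redis, instead of redeploying the app
app.post('/reload', async (req, res) => {
  await loadConfig();
  res.json({ reloaded: true });
});

app.get('/some-endpoint', (req, res) => {
  res.json({ usingValue: config.someValue }); // read from the shared config object
});

redis.connect()
  .then(loadConfig)
  .then(() => app.listen(3000));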
Please share your experience and how you overcame this.
I've been using K8S for a year or so and continue to revisit a problem.
My app is running in K8S and I now need to debug it. I'm asking about a NodeJS app, but similar questions could be asked about Java SpringBoot apps (this question is just for NodeJS).
I want to use my favorite IDE (IntelliJ or VSCode) to run the app, but the app currently gets its configuration (inside K8S) from ConfigMaps and Secrets.
(Q) Is there a "best practice" or "pattern" that follows the DRY principle and keeps configuration in one place that can be used both in K8S and when running locally?
Background
In my NodeJS app I decided to use ENVIRONMENT variables to hold configuration information, because that worked well in the IntelliJ IDE, in Docker, and in K8S.
I used npm dotenv and created .env.local, .env.stage, .env.prod files to support running in different environments. This worked well enough until it was running in K8S and someone wanted to tweak the configuration and didn't believe that rebuilding the image was the best way to support this. Instead the K8S experts told me I should use ConfigMaps and Secrets, so I converted from the dotenv approach to use the K8S ConfigMaps and Secrets.
I kept the old .env files around just in case, and I can still use them, but the source code no longer uses the dotenv package:
require('dotenv').config()
process.env.myConfigVariable
So I need to either add that code back to support debugging, or set the environment variables manually. I'm wondering if there is a better approach (a conditional-loading sketch is below).
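One low-effort pattern (just a sketch, not from the original setup) is to load dotenv only when a .env file is actually present, so the same code runs unchanged in K8S, where ConfigMaps and Secrets already populate process.env:

// config.js - a sketch; assumes .env.local sits next to the app when run from the IDE
const fs = require('fs');

if (fs.existsSync('.env.local')) {
  // Local/IDE run: populate process.env from the file
  require('dotenv').config({ path: '.env.local' });
}
// In K8S, ConfigMaps and Secrets have already injected the variables,
// so process.env.myConfigVariable works the same in both cases.
module.exports = {
  myConfigVariable: process.env.myConfigVariable,
};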
I have YAML file templates to make it easy to recreate the deployment from scratch if/when needed.
.env.local
deploy/
  helm/
    create-configmap.yaml
    create-secret.yaml
src/
  common/*
  appMain.js
Some of the approaches I've considered:
(a) Accept it and have two configs (one for local and one for K8S). Leave the code for dotenv but don't deploy a .env file when deploying to K8S.
(b) Run a local k8s (like minikube or k3s) and use my ConfigMap and Secrets as I would with K8S. I would then need to figure out how to connect from my IDE to the local k3s environment and open ports in it to support this. Some solutions include Bridge to Kubernetes, the YouTube video Remote Debugging in Kubernetes with Cloud Code, Debug Java Microservices in Kubernetes with IntelliJ, and I'm sure several others.
(c) Use a JSON config file instead of dotenv. For example, use a single JSON config file for everything, map it to /app/config.json, and use that same config file in both environments. I could have config-local.json, config-stage.json, and config-prod.json to support the different environments (see the sketch after this list).
(d) You tell me. What's another way?
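To make option (c) concrete, here is a rough sketch of a loader that prefers an explicitly mounted JSON file and falls back to plain environment variables; the CONFIG_FILE variable and the /app/config.json mount path are assumptions for illustration, not a definitive implementation.

// loadConfig.js - a sketch of option (c)
const fs = require('fs');

function loadConfig() {
  // In K8S, mount the ConfigMap as a file and point CONFIG_FILE at it
  // (e.g. /app/config.json); locally, point it at config-local.json.
  const file = process.env.CONFIG_FILE || './config-local.json';

  if (fs.existsSync(file)) {
    return JSON.parse(fs.readFileSync(file, 'utf8'));
  }

  // Fall back to environment variables injected by ConfigMaps/Secrets
  return {
    myConfigVariable: process.env.myConfigVariable,
  };
}

module.exports = loadConfig();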
Thanks!
I've created two pipelines, build and release, for a Node.js app.
Here is the link to the Node.js app repo: azure web service
Here are the tasks for the build pipeline:
Here is the wwwroot folder structure:
So it looks like all required files are present.
Despite that, I'm constantly receiving:
You do not have permission to view this directory or page
I've tried to add a web.config file, but it did not help.
I have a front-end application on the same App Service Plan and it works, so there is no way I have a bad service plan.
Do you have any suggestions?
Thanks a lot.
I was able to deploy my service only after using the nodejs-docs-hello-world starter.
It looks like web.config is a required file; by the way, I still haven't found any meaningful documentation for web.config.
Make sure your Azure Node.js environment supports your JS syntax (import ... from ...); otherwise use webpack or TypeScript.
I've found the App Service Editor very helpful if you want to debug your code errors. See the Output section.
I also had a problem with the Node.js version: despite the fact that I chose Node 12 LTS during web app creation, I noticed that my app used Node 6 under the hood. So I changed the default Node.js version to 10. See here how to do it.
Also, I want to thank @Jason Pan for his help.
I develop an application with Node.js and React. I use dotenv for configuration in my different environments.
I use TFS 2017 for build and release my application.
What is the best practice for adding my .env file for the production environment?
Production configs can be difficult to maintain. You can use any module, or even write your own custom module, to load environment variables into your application.
However, maintaining production .env files locally for each product (i.e. committing them or putting them inside your Docker image) is a bad idea. If you ever happen to change some of those configs, you will have to manually change the code for each application, build the Docker image (if required) and redeploy. If there are just a couple of applications, that might be easy. But if the number of related applications grows (as usually happens with a microservice-based architecture), all of them sharing credentials, IPs for database connections, etc., it becomes a hectic task to manually change code and redeploy all the applications.
Usually, developers tend to keep a central repository of all credentials and environment variables. For example, AWS offers Parameter Store and Microsoft Azure offers Azure Key Vault. In such a case, all the parameters are fetched at start time, and all we have to do is restart the application.
So usually people set only one global variable, NODE_ENV (as prod or dev), and dynamically fetch all environment variables based on NODE_ENV by running something like env $(node read-env-variables.js) node app.js.
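As a rough sketch of what such a read-env-variables.js could look like (the config URL below is hypothetical; a real script would call Parameter Store, Key Vault, or whatever central store you use), it simply prints KEY=value pairs for the shell to pass through env:

// read-env-variables.js - a sketch; replace fetchConfig() with a call to your central store
const https = require('https');

// Hypothetical endpoint returning {"DB_HOST": "...", "DB_PASSWORD": "...", ...}
const CONFIG_URL = process.env.CONFIG_URL ||
  'https://config.example.internal/myapp/' + (process.env.NODE_ENV || 'dev');

function fetchConfig(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => resolve(JSON.parse(body)));
    }).on('error', reject);
  });
}

fetchConfig(CONFIG_URL).then((vars) => {
  // Print KEY=value lines so the shell can do: env $(node read-env-variables.js) node app.js
  for (const [key, value] of Object.entries(vars)) {
    console.log(`${key}=${value}`);
  }
});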
I have an Angular4 web app, deployed on Azure. Now I want to deploy this app to other environments on Azure: one for testing, one for acceptance and one for production. Every environment has different API endpoints and may have other variables, like Application Insights. All those environments run Angular in production mode.
The way Angular advises you to do this is with the environment files (environment.test.ts, environment.acc.ts, environment.prod.ts). I could configure all the different API endpoints in those files and run my build with --prod for production, for example.
But that is not the way I want to do this. I want to use the exact same application package deployed to test for my acceptance environment, without rebuilding the project. In Visual Studio Online, this is also really simple to configure.
The point is: how can I make my API endpoints differ per environment in that way?
The way I want to do this is with the App Settings in Azure. But Angular can't get to those environment variables because it's running on the client side. Node.js runs on the server side and could read those App Settings; but if that's the way I need to do it, how do I make Node.js (used by the Angular 4 CLI) send those server variables to the client side? And what about the performance impact of this solution?
How did you fix this problem for your Angular4 apps on Azure? Is it just impossible to fix this problem with the Azure App Settings?
For everyone with the same question: I didn't fix this problem the way I described above.
In the end, I did it the way Angular wants you to do it: rebuild for dev, rebuild for acc and rebuild for prod.
In Visual Studio Online, at build time, it builds and tests our code and saves the uncompiled/unminified code. At release time, it builds and tests it again and releases it to the right environment with the right environment variables (--prod, for example).
I don't think there is another way to fix this.
The solution is pretty old school but it works! Although you could use branching or tagging for this purpose instead of copying the code into the package.
The best solution, as you said, is Azure App Settings: they are saved as environment variables, so you should implement an API with Node.js and expose the variables you want (a sketch is below).
Of course there is an impact because of the additional HTTP call, but it happens just once at application start and takes at most about 5 ms; whether that matters depends on each program's policy.
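For illustration, a minimal sketch of such an endpoint could look like this (the /api/config route and the specific setting names are assumptions, not from the original answer); the Angular app would call it once at startup:

// server.js - a sketch: expose selected Azure App Settings to the client
const express = require('express');
const app = express();

app.get('/api/config', (req, res) => {
  // Only hand out the settings the client actually needs;
  // App Settings appear on the server as plain environment variables.
  res.json({
    apiEndpoint: process.env.API_ENDPOINT,                          // assumed setting name
    appInsightsKey: process.env.APPINSIGHTS_INSTRUMENTATIONKEY,     // assumed setting name
  });
});

app.listen(process.env.PORT || 3000);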
Another option could be to move the variables to a JSON file in the assets folder and change it at deploy time with the release pipeline. That's easier to implement, but the disadvantage is that you will have to use release variables instead of App Settings, and if the config changes you will have to update the variable value first and redeploy. That works most of the time, but sometimes you want to change just a connection string and you will still have to redeploy.
I'm still new to web development and I'm using Firebase to handle all my data right now.
I have everything up and running, but how do I make it so my Firebase website updates whenever I make a change to my files? Do I have to manually call firebase deploy after each change in order to see the updated site?
To deploy your changes to the Firebase Hosting server, you will indeed have to run firebase deploy.
But normally when I develop an application, I run a local web server for the most part. I then only push the changes to Firebase Hosting when I have finished the feature/bugfix that I'm working on.
For local execution, I use either http-server or a gulp script that also packs the files. The latter has the advantage that it can watch your local files for changes and execute the correct steps based on that; a rough sketch follows.
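As an illustration only (the src/ and public/ folder names are assumptions, and the real packing steps depend on the project), a gulp 4 watch setup could look roughly like this:

// gulpfile.js - a sketch of a watch-and-pack setup, assuming src/ as input and public/ as the hosted folder
const gulp = require('gulp');

// "pack" step: here it just copies files; a real project would add bundling/minification
function build() {
  return gulp.src('src/**/*').pipe(gulp.dest('public'));
}

// rebuild whenever a local file changes
function watch() {
  gulp.watch('src/**/*', build);
}

exports.build = build;
exports.default = gulp.series(build, watch);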
I'm working on an Angular 4 app with Firebase as a backend, so the steps are
$ ng build --prod
$ firebase deploy
It really depends on what you are doing and what you're trying to deploy.
There are three different areas to deploy to:
Hosting - this is just a simple web server in which to house your HTML, JavaScript and any other static files
Database - your Firebase access rules are placed in here
Storage - access rules to the file store, typically user submitted files
Typically you'll be developing your HTML and JavaScript files locally and testing them there. When you're ready to deploy to the hosting environment, you'll typically run firebase deploy; this deploys all of the local files and rules to the Firebase servers.
If your question relates just to the database rules, then there is no local version or instance of these; you need to deploy changes as you make them in order to make them active.
You can perform a rules update by issuing the command firebase deploy --only database. Just make sure you have a firebase.json file with "database": { "rules": "firebase.rules.json" }, or similar defined in it.
Bonus: use Bolt to build the rules; it transpiles into a Firebase JSON rules file but makes development much easier, especially when your rules inevitably become more complicated.