There are multiple Node services currently deployed and running through pm2 in an AWS environment.
The difficulty (in terms of maintenance) I see in my current code base is that each of these Node services has a separate configuration file (config\app.json). Although most of the properties in these configuration files are common to all the services, each property is repeated in each individual service's code. If there is a change in any of these properties, I have to make it in multiple places.
I would like to centralise the configuration across multiple Node services. Is there a way to do that? The expectation is to have a centralised place for maintaining configuration. Any references would help.
I am not sure what your architecture looks like, but if you do not mind creating a small library or microservice that fetches configuration from a small key-value store such as Redis, that will give you a centralized place for your configuration.
The only configuration that then remains local is the Redis connection itself, which you can provide when starting each service, e.g. as environment variables or command-line arguments parsed with something like yargs.
Then every service only has to make a single call at startup to populate its config JSON (in your case config/app.json).
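For illustration, here is a minimal sketch of that startup step, assuming the node-redis client and a single JSON blob stored under a key such as shared:app-config (the key name, the data shape and the fallback path are placeholders, not anything your setup requires):

```js
// Hypothetical sketch: pull the shared config from Redis at startup,
// then fall back to the service's local file if Redis is unreachable.
const { createClient } = require('redis');
const fs = require('fs');

async function loadConfig() {
  const client = createClient({ url: process.env.REDIS_URL }); // e.g. redis://config-host:6379
  try {
    await client.connect();
    const raw = await client.get('shared:app-config'); // placeholder key
    if (raw) return JSON.parse(raw);
  } catch (err) {
    console.warn('Could not reach Redis, using local config:', err.message);
  } finally {
    if (client.isOpen) await client.quit();
  }
  // Fallback: the service's existing config/app.json
  return JSON.parse(fs.readFileSync('./config/app.json', 'utf8'));
}

module.exports = { loadConfig };
```

Each service would call loadConfig() once at startup, so the only per-service setting left is the REDIS_URL environment variable.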
Good morning folks,
I would like to extend the architecture of a Node.js microservice that is responsible for authorization to specific resources. I need a pattern that will allow me to read dynamic configuration (a list of roles/permissions) and then grant access to specific endpoints inside the Node.js application.
At the moment I have a simple database in Firestore (GCP's NoSQL database) where the configuration with roles and permissions is stored, but it would be great to have it managed dynamically from a file under git control. That would give me a couple of benefits: roles/permissions would have a history and would be easy to handle for people who are more familiar with git (users wouldn't have to learn a new Firestore interface, etc.).
Currently I use API Gateway, with Cloud Run as the main backend for the Node.js application.
Do you have any ideas on how to design a solution with a dynamic roles/permissions file under git that can be changed at any time? I need an up-to-date permissions file inside the application, so I need a way to load the latest version (maybe caching the file and flushing the cache after the file is updated).
One possibility is to have some API endpoint where this file would be located and read every time the Node.js app wants to check permissions, but I'm not sure that is really an efficient approach.
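Something along these lines is what I have in mind for the cached approach (a rough sketch only; the raw file URL, the TTL and the file shape are placeholders):

```js
// Hypothetical sketch: in-memory cache of a roles/permissions JSON file kept in git.
// PERMISSIONS_URL would point at the raw file, e.g. a raw.githubusercontent.com URL.
const PERMISSIONS_URL = process.env.PERMISSIONS_URL;
const TTL_MS = 60 * 1000; // refresh at most once a minute (placeholder)

let cached = null;
let fetchedAt = 0;

async function getPermissions() {
  if (cached && Date.now() - fetchedAt < TTL_MS) return cached;
  const res = await fetch(PERMISSIONS_URL); // Node 18+ global fetch
  if (!res.ok) {
    if (cached) return cached;              // serve the stale copy on failure
    throw new Error(`Failed to load permissions: ${res.status}`);
  }
  cached = await res.json();
  fetchedAt = Date.now();
  return cached;
}

// Example Express middleware using the cached file (assumed shape: { role: [paths] })
function requireRole(role) {
  return async (req, res, next) => {
    const permissions = await getPermissions();
    const allowed = (permissions[role] || []).includes(req.path);
    return allowed ? next() : res.status(403).end();
  };
}
```

A git webhook could additionally hit a small endpoint that resets the cache, which would cover the "flush cache after updating the file" part without waiting for the TTL.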
On all servers we have .env files that set the config for the (Node.js) server on start.
Now I want to edit these files from an admin panel (another web service that talks to the main server through an API).
Are there any best practices, or just good ideas, for how I can do that?
First idea: create another web server on the instance, which would have only two API endpoints (read, write) and would restart the main server after editing the configs. This idea looks too heavy.
Second idea: create a bash script that sends requests to the admin server to fetch the current configs and rewrites the local .env file if it finds changes, but that would generate a lot of unnecessary requests (a request every minute, while the configs change about once a month).
What do you think? Any ideas?
You have a couple of options, and it depends primarily on your deployment strategy.
If you have a distributed environment and/or your configuration changes often (e.g. running multiple Docker containers, rotating keys, etc.), I'd highly recommend using a K/V store and reading configuration dynamically during application start. Check out HashiCorp Vault, etcd or even MongoDB.
If your configuration contains sensitive data, definitely use something like HashiCorp Vault. If you use a configuration tool like Ansible, it has ansible-vault, which will encrypt your secrets at rest and decrypt them during deployment.
I would highly advise against storing (even potentially) sensitive data such as API keys, tokens, etc. in version control. This is a pretty big attack vector and will lead you down a dark road.
Worst case, use environment variables. Almost all CI/CD tooling supports them, and you can maintain separation of concerns.
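As a rough illustration of "read configuration dynamically during application start", here is a sketch that pulls secrets from Vault's KV v2 HTTP API and falls back to plain environment variables; the mount path secret/data/myapp and the variable names are assumptions, not anything your setup requires:

```js
// Hypothetical sketch: fetch secrets from HashiCorp Vault (KV v2) at startup,
// falling back to environment variables if Vault is not configured.
async function loadSecrets() {
  const addr = process.env.VAULT_ADDR;   // e.g. https://vault.internal:8200
  const token = process.env.VAULT_TOKEN; // injected by your deployment tooling

  if (addr && token) {
    const res = await fetch(`${addr}/v1/secret/data/myapp`, { // assumed mount/path
      headers: { 'X-Vault-Token': token },
    });
    if (!res.ok) throw new Error(`Vault request failed: ${res.status}`);
    const body = await res.json();
    return body.data.data; // KV v2 nests the key/value pairs under data.data
  }

  // Worst case: plain environment variables
  return {
    dbUrl: process.env.DB_URL,
    apiKey: process.env.API_KEY,
  };
}
```

With this shape, the admin panel only ever writes to Vault (or the CI/CD variable store), and the servers pick up the values on their next start instead of having their .env files edited in place.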
I want to create a modular Node.js application stack containing a set of applications. The idea is that app1, app2, etc. can share the same controllers and models.
Inside each app folder, I can have app specific package.json, app.js, etc.
I am using express.
I have two issues:
Is it possible to have that structure?
Why am I not able to deploy such an app set on GCP? When I try, it throws a 500 Internal Server Error.
To create a similar architecture (even though I didn't find a way to get exactly the same structure), you should use services. According to the official GAE documentation:
Use services in App Engine to factor your large apps into logical components that can securely share App Engine features and communicate with one another. Generally, your App Engine services behave like microservices. Therefore, you can run your whole app in a single service or you can design and deploy multiple services to run as a set of microservices.
Does this work for your use case?
Regarding question 2, you didn't provide any information about your current process, so I cannot help you. Please edit the question to add the deployment configuration (app.yaml, etc.) and how the deployment is performed. Please remove any sensitive information before posting it.
We have a set of Node.js microservices, and each of our microservices has individual configuration files for different environments, such as:
default.json
dev.json
staging.json
production.json
How should I approach this?
Is it feasible to create a centralised configuration for all microservices instead of having individual ones?
Which is preferred: centralised config or individual config?
I also googled it but found no information on this. I am mainly looking for suggestions on how this can be achieved.
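For context, that file layout matches what the config (node-config) package expects: it loads default.json and then overlays the file matching NODE_ENV, which is roughly how each service resolves its settings today (a sketch; the key names are just examples):

```js
// Roughly how each service reads its per-environment configuration today,
// assuming the standard "config" (node-config) package and files under ./config.
// With NODE_ENV=production, values from production.json override default.json.
const config = require('config');

const dbHost = config.get('db.host');   // throws if the key is missing everywhere
const logLevel = config.has('logLevel') // optional key with a fallback
  ? config.get('logLevel')
  : 'info';

console.log(`Connecting to ${dbHost}, log level ${logLevel}`);
```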
Do not do it
The idea of splitting your application into microservices is to keep them independent. Centralised configuration breaks this idea; moreover, to do it (for example with some kind of proxy microservice) you would probably have to run them on the same machine.
Is it for local development?
If it is, simply create docker-compose containers to give developers an easy way to set up the development environment. This will still require a separate configuration for each container/service.
Do not do microservices
Maybe what you want to achieve is not a microservice architecture. Take a look here; it might be what you wanted instead, and your services should be easy to port into bounded contexts.
Also keep in mind that bounded contexts are not microservices.
This question does not necessarily pertain to the organization of a Node project's structure; it is more about how to represent separate, logical services. Within our team, we have requirements to create and support several services (i.e., sets of API endpoints). These services aren't directly related, so my initial reaction is that they should be separate projects with separate code bases running in separate Node (or Express) servers. I'm wondering if this approach would complicate deployment and management. The alternative would be to have a single "entry point" (i.e., a single Node server) that delegates to the respective services depending on which context root or URL is seen. I'm curious which approach seems more logical and how people are handling these "microservices" in the wild now.
These services aren't directly related
These services should be separate projects/repos with distinct entry points.
I'm wondering if this approach would complicate deployment and management.
Yes, absolutely. I have several NodeJS JSON APIs in production and for each, I have 2-3 environments (canary, staging, production). When you get to about 3 production services in the wild, things can get unwieldy without some discipline.
You can manage this with documentation (via a wiki or in the repo) about each service and its environments, as well as any other dependencies (services that this service depends on).
This also helps with emergencies where a service is slow or not responding. Sometimes the service itself is fine, but one of its dependencies could be down. For example, the GitHub API may be a dependency, and it goes down.
The alternative would be to have a single "entry point" (i.e., a single Node server) that delegates to the respective services depending on which context root or URL is seen.
In some cases, you may also have to build a "gateway" service which consumes your other single-purpose services. One reason to do this is to support authentication and authorization (e.g. OAuth).
In other words, you may need multiple microservices and a gateway service.
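If it helps, here is a minimal sketch of such a gateway using Express and http-proxy-middleware; the service addresses, route prefixes and the auth check are placeholders for whatever your environment actually uses:

```js
// Hypothetical gateway sketch: authenticate once, then route to single-purpose services.
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Placeholder auth check; in practice this would validate an OAuth token.
app.use((req, res, next) => {
  if (!req.headers.authorization) return res.status(401).end();
  next();
});

// Route each URL prefix to its own single-purpose service (addresses are placeholders).
app.use('/users', createProxyMiddleware({ target: 'http://users-service:3001', changeOrigin: true }));
app.use('/billing', createProxyMiddleware({ target: 'http://billing-service:3002', changeOrigin: true }));

app.listen(8080, () => console.log('Gateway listening on 8080'));
```

Each downstream service stays a separate project with its own repo and deployment; the gateway is just one more small service in the set.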