I have written two node scripts running on Mac/Unix which invoke REST services. Each script requires some configuration information like endpoints, keys, secrets, passwords, etc. I'm not sure how this information should be stored or passed to the node scripts. I'm new to node and not sure where this type of information is typically kept for a node application. It's really just a bunch of scripts that I intend to call periodically, so these are not services but scripts that invoke services.
Looks like node-config module is the way to go:
https://github.com/lorenwest/node-config
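For reference, a minimal sketch of how node-config is typically used; the `api.endpoint` / `api.key` keys below are just placeholders for your own settings:

```js
// config/default.json (safe defaults, checked into git):
//   { "api": { "endpoint": "https://example.invalid/v1", "key": "" } }
// config/production.json or custom-environment-variables.json can override these;
// node-config picks the file set based on NODE_ENV.
const config = require('config');

const endpoint = config.get('api.endpoint'); // throws if the key is missing
const apiKey = config.get('api.key');
console.log(`Calling ${endpoint} with a key of length ${apiKey.length}`);
```

Secrets themselves are usually kept out of the checked-in files and supplied via environment variables; node-config's custom-environment-variables.json maps env vars onto config keys.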
I have a server application using Node, and sometimes I need to run some script in it. Some examples of scenarios when this would be necessary:
During development, I need to create many entries in the database to simulate a use case.
In production, a bug sometimes leaves information incorrectly stored in the DB, and I need to backfill it.
The only way I know how to do this in node is to deploy an instance of the server with an endpoint that contains the code to be run.
It is attractive to use the node server for this because it already has a lot of code I can reuse, for example DAOs and safe create/delete functions.
Django has an interactive Python interpreter (shell) that does this job, but I do not know of anything similar for Node.
Other strategies for handling these use cases are very welcome.
During development, you can just go with debugging, although that requires triggering a breakpoint. Alternatively, if it's just about your database, there are better external programs to interact with your database.
To actually answer your question: Node does have a VM module to run code, and even a REPL module to help write custom REPLs. This requires some work to wire up your APIs, but it is doable.
As for how you actually interact with that REPL, there are several options. Using a raw socket (and Telnet), a terminal on your site communicating over a WebSocket, a simple HTTP endpoint, ...
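As a rough sketch, a Telnet-reachable REPL with your own APIs attached could look something like this; the `dao` object is a stand-in for whatever your server already exposes:

```js
const net = require('net');
const repl = require('repl');

// Stand-in for your real data-access layer, e.g. require('./dao')
const dao = {
  createUser: (name) => Promise.resolve({ id: Date.now(), name }),
};

net.createServer((socket) => {
  const r = repl.start({ prompt: 'app> ', input: socket, output: socket, terminal: true });
  r.context.dao = dao; // in the REPL: dao.createUser('test').then(console.log)
  r.on('exit', () => socket.end());
}).listen(5001, '127.0.0.1'); // unauthenticated, so bind to localhost only
```

While the server is running you can connect with telnet 127.0.0.1 5001 (or nc) and call your helpers interactively.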
You can add scripts to your package.json that point at the script's path. For example "seed:db": "node ./src/seeder.js -i", "drop:db": "node ./src/seeder.js -d", where the -i and -d flags determine whether I am inserting or deleting and can be read from process.argv[2], as sketched below.
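A minimal sketch of what that could look like (the file layout and messages are just illustrative):

```js
// package.json
//   "scripts": {
//     "seed:db": "node ./src/seeder.js -i",
//     "drop:db": "node ./src/seeder.js -d"
//   }

// src/seeder.js
const flag = process.argv[2]; // "-i" or "-d"

if (flag === '-i') {
  console.log('inserting seed data...');  // call your insert/seed logic here
} else if (flag === '-d') {
  console.log('dropping seeded data...'); // call your delete logic here
} else {
  console.error('usage: node ./src/seeder.js [-i|-d]');
  process.exit(1);
}
```

You then run it with npm run seed:db or npm run drop:db.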
I'm the backend half of a small team that primarily builds apps on a postgres/nodejs/apollo graphql/react stack.
In my hobby projects I use golang and have gotten decent at constructing CLI apps with cobra/viper. I'm starting to play with the idea of moving all the critical business logic and data access into small reusable CLI apps built in golang and distributed as binaries. I envision these CLIs producing JSON output so that it is machine-readable.
The nodejs graphql servers would then become shallower wrappers around the CLI binaries, called using something like const { stdout, stderr } = await exec('<<MY CLI --here >>');
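Concretely, the wrapper I have in mind would look roughly like this; mycli and its flags are just placeholders, and execFile avoids shell-quoting issues compared with exec:

```js
const { promisify } = require('util');
const execFile = promisify(require('child_process').execFile);

// `mycli` and its arguments stand in for the Go binary described above
async function listOrders(customerId) {
  const { stdout } = await execFile('mycli', [
    'orders', 'list', '--customer', String(customerId), '--json',
  ]);
  return JSON.parse(stdout); // the CLI is expected to print one JSON document
}
```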
Separating business logic and data access out into a CLI is attractive to me for reusability in non-server scenarios. Also, I just really enjoy writing Go more than Node. This seems like a decent idea, but perhaps I am overlooking some pitfall to this approach? Has anyone taken an approach like this?
Use a CLI utility only when individual users run it from a terminal.
Launching that many CLI processes from a nodejs server would not be efficient or scalable once the node server gets many concurrent requests; launching too many CLI processes would make it slow and consume system resources.
I would use an API instead. The node server would pipe the request to the Go API server. As for the CLI, to be usable in standalone mode from a terminal, you would put all your logic into a separate module (lib). That module can then be used by both the Go API server and the cmd utility: the cmd utility and the Go HTTP API server are just hosts, while the actual logic lives in the module.
Or, even better, the command-line utility could have two modes: run as an HTTP server or as a standalone utility.
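On the Node side, the resolver then becomes a plain HTTP call to the Go service rather than spawning a process. A minimal sketch, assuming the Go server listens on localhost:8080 and Node 18+ for the global fetch:

```js
const GO_API = process.env.GO_API_URL || 'http://localhost:8080';

async function listOrders(customerId) {
  const res = await fetch(`${GO_API}/orders?customer=${encodeURIComponent(customerId)}`);
  if (!res.ok) throw new Error(`go api responded with ${res.status}`);
  return res.json(); // same machine-readable JSON the CLI would have printed
}
```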
Basically I don't want to use an existing hosted mongodb service like the official MongoDB cloud or whatever; how can I do what they do, but myself? Do I just include the database folder, along with all of the mongodb executables, in my nodejs folder and call require("child_process").spawn("mongod.exe", /insert params here/), or is there some kind of way to do this in the mongo module?
And also, do I need my own virtual machine to be able to do this, or can this work on a standard heroku nodejs application, for example?
Anyone?
Heroku's hosting solution has only ephemeral volumes, so you can't use it for a database. Any files you create are temporary and will be purged on a regular basis.
For example, when your application is idle Heroku will de-provision that resource and clear out any data you've left there.
You can't use Heroku like this; you must use an external database service or one of their many add-on offerings.
I am looking to bind a PCF (Pivotal Cloud Foundry) service to let us set certain API endpoints used by our UI within the PCF environment. I want to use the values in this service to overwrite the values in the root directory file 'config.json'. Are there any examples out there that accomplish this sort of thing?
The primary way to tackle this is to have your application do the parsing. Most (all?) programming languages give you the ability to load environment variables and to parse JSON. Using these capabilities, what you want to do is read the VCAP_SERVICES environment variable and parse the JSON. This is where the platform inserts the information from your bound services. From there, you have the configuration information, so you can configure your app using the values from your bound service.
Manual Ex:
var vcap_services = JSON.parse(process.env.VCAP_SERVICES)
or you can use a library. There's a handy Node.js library called cfenv. You can read more about both of these options in the docs.
https://docs.cloudfoundry.org/buildpacks/node/node-service-bindings.html
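A small sketch with cfenv; the service name "ui-config" and the apiEndpoint field are assumptions about how your service is bound:

```js
const cfenv = require('cfenv');

const appEnv = cfenv.getAppEnv();                  // parses VCAP_SERVICES / VCAP_APPLICATION
const creds = appEnv.getServiceCreds('ui-config'); // credentials of the bound service, or null
if (creds) {
  console.log('UI API endpoint:', creds.apiEndpoint);
}
```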
If you cannot read the configuration inside your application (perhaps there's a timing problem and you need the information before your app starts), you can use the platform's pre-runtime hooks.
https://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html#profile
The pre-runtime hooks allow your application to include a file called .profile which will execute before your application. The .profile file is a simple bash script which can do anything needed to get your application ready to run. The only catch is that this needs to happen quickly, because it must complete before your application starts up, and your application has a finite amount of time to start (usually 60s).
In your case, you could use jq to parse your values and insert them into your config file, perhaps using sed to overwrite a template value. Another option would be to run a small Node.js script to read the environment variables and generate your config file; since your app uses Node.js, node should be available on the path when this script runs. A rough sketch of that second option follows.
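For example, a .profile containing just node generate-config.js could run something like this before your app boots (the service name "ui-config" and the shape of config.json are assumptions):

```js
// generate-config.js
const fs = require('fs');

const vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');
// user-provided services show up under the "user-provided" key
const service = (vcap['user-provided'] || []).find((s) => s.name === 'ui-config');

if (service) {
  fs.writeFileSync('config.json', JSON.stringify(service.credentials, null, 2));
  console.log('config.json written from ui-config binding');
} else {
  console.error('ui-config service not bound; leaving config.json as-is');
}
```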
Hope that helps!
I have a requirement. Is there a way to run nodejs apps inside golang? I need to wrap the nodejs app inside a golang application so that the end result is a golang binary that starts the nodejs server and lets me call the nodejs REST endpoints. I need to encapsulate in the golang binary the entire nodejs application with node_modules and, if necessary, the nodejs runtime.
Well, you could make a Go program that includes e.g. a zipped Node application that it extracts and starts, but it will be very hard to do well: you will have huge binaries, delays while extracting files, potential portability problems, etc. Usually when you want to call REST endpoints, you host your Node app on some server and let the client app (the Go app in your example) connect to it. The advantages are that it is much faster, the app is much smaller, you don't have portability issues with Node binaries and addons, and you can quickly update your backend any time you want.
It would be a very bad idea to embed a nodejs app into your golang binary, for various reasons such as size, pushing security updates, etc.
However, if you feel strongly that they should ship together, you could easily create a docker container with the two (a golang server + a node app) and launch them via docker. You can set the entrypoint to a supervisord daemon so that both the node server and the golang server are brought up when your container is run.
If you are planning to deploy via kubernetes, you can create two individual docker containers (one for the golang server, one for the node server) but always deploy them together as a pod.
There are multiple projects to embed binary files and/or file system data into your Go application.
Look at the 'Alternatives' section of the 'vfsgen' project:
https://github.com/shurcooL/vfsgen#alternatives