I am trying to understand a few things that came up while writing a COPY command in a SQL (.sql) file generated by Prisma. I understand that the file path given to COPY is resolved on the machine where the database server is running (absolute, or relative to the server process's working directory).
COPY table(value,other) FROM 'how/do/i/get/the/right/path' DELIMITER ',' CSV HEADER
Can someone explain, when we have a hosted server and database (I believe they are usually on separate machines), how we would COPY from a CSV file? The file is part of my typical git repo. After reading, my understanding is that the file would actually need to be on the machine where the database is hosted, is that correct? I am not sure I have a full grasp on this. Do hosted DB servers also store files? I thought they would just hold the database (I understand it's a machine that could technically have anything on it, but is that ever done?)
What would be the best way to access this file from the DB? Would I use psql to connect to my database and then ssh into the server? I think there are other solutions, like running a script that uses psql and the \copy variation.
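For example, I imagine something like the following run through psql from my own machine, where the path would be relative to wherever psql is invoked (the path here is just a placeholder):
-- sketch: \copy is a psql client-side meta-command, so the file is read on the client machine
\copy table(value,other) FROM 'prisma/data/values.csv' DELIMITER ',' CSV HEADER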
Ideally, I wanted the file to be part of my repo and to have a Prisma migration file copy its contents into a database table. If the above is incorrect, could someone clarify how I would get the correct path into the command? I want it to be in a .sql file, with the host path put in as a variable (assuming that would work, depending on the answers to the points above about where files live).
Thanks!
Sources used:
https://www.postgresql.org/docs/current/sql-copy.html
https://www.cybertec-postgresql.com/en/copy-in-postgresql-moving-data-between-servers/
I'm a complete newbie with Docker, and to deploy to production I need to read the "environment" variables from the operating system of the container, instead of from a file or from the package.json script line.
I know how to read variables from a .env file or from the script line, but I don't know how to read them from the system, or whether that is even possible.
How can I do that? Is it possible?
The process doesn't change. You still use the line below to read the environment variables of the environment the process is running in. See the Node.js reference for process.env.
const envVariable = process.env.NAME_OF_VARIABLE;
Setting the variable in a Dockerfile can be done with the ENV instruction, shown below. The docs for this are in the Dockerfile reference.
ENV <key>=<value> ...
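For example (the names and values below are just placeholders):
# placeholder values for illustration
ENV NODE_ENV=production
ENV API_URL=https://api.example.com
Variables set this way end up in the container's environment, so process.env picks them up without any .env file. You can also override them when starting the container, e.g. with docker run -e API_URL=https://staging.example.com <image>.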
It gets more complicated when you're using something like LXC or Kubernetes, though.
I am building a relatively complex distributed Node system. Let's say there are two processes (node apps), A and B. They are defined in separate projects.
In addition, there are a couple of custom-made node modules used in both A and B. Let's call them M and N; M itself uses N.
How should I correctly handle environment vars?
I guess I should define a .env file for both main processes (A and B), handle all env vars there, and simply pass the needed values down to M and N. This way, M and N (and other internal modules) would receive their own env vars as parameters on creation.
Is this approach correct?
Having modules access process.env directly is not a good idea, and modules having their own .env file is an even worse idea.
.env files should not be added to source control (i.e. git) because they change with the environment (dev, prod, pre-prod) and sometimes contain sensitive information (like AWS secret keys). So that would require you to paste a .env file each time you install your node_modules, making your deployment process more complex.
The .env file loaded inside your module could merge in unexpected ways with the .env of your root app (remember there is only one process.env).
Imagine a case where your module would need to behave differently in two parts of your application. How would you override the data loaded via the .env file only in one place?
So in my opinion, your guess is correct: don't put .env in node_modules.
// This is better: the module receives its configuration explicitly as parameters...
nModule.someMethod(process.env.PARAM1, process.env.PARAM2);
// ...than this: mutating the global process.env so the module can read it internally
process.env.PARAM1 = '';
process.env.PARAM2 = '';
nModule.someMethod();
I feel strongly that environment variables are reserved for, well, the environment. This to me implies a few things:
Inside the code, these variables should be global, i.e., accessed only via process.env.
They are not passed down to other modules. Certainly it's a good idea to make dependencies customizable with parameters you can pass to the functions they export. But the environment should not be used for that.
How you load values into process.env is really a question of how you start your programs A and B. I personally prefer systemd services for that, which have excellent support for defining the runtime environment. The dotenv package seems more like a crutch, but it's fine from the program's perspective.
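For example, a minimal sketch of the relevant part of a unit file for A (paths and values here are made up):
# a.service (hypothetical): systemd injects these into the process environment
[Service]
Environment=NODE_ENV=production
EnvironmentFile=/etc/myapp/a.env
ExecStart=/usr/bin/node /srv/a/index.js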
Your approach sounds correct and it should work. When you define a .env file and use the dotenv package, you will be able to access all the variables inside .env in your code. That means that custom-made node modules will also be able to access them, and you don't have to pass them anything (you can access the values directly with process.env.NAME_OF_ENVIRONMENT_VARIABLE).
TO SUMMARIZE: Create a .env file in both A and B, use the dotenv package, and then you can access environment variables directly inside the code with process.env.NAME_OF_ENVIRONMENT_VARIABLE (in custom node modules as well). You can also give your custom modules a constructor and, in A and B, just pass process.env.ENV_WHATEVER as parameters. That is an even better approach, since your custom module's logic will be independent from the rest of the app (it will depend only on its input).
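A rough sketch of that constructor approach might look like this (module and variable names are made up):
// m.js: the custom module only sees what it is given
class M {
  constructor({ apiUrl, apiKey }) {
    this.apiUrl = apiUrl;
    this.apiKey = apiKey;
  }
}
module.exports = M;

// a.js: app A loads its .env and passes the values down explicitly
require('dotenv').config();
const M = require('./m');
const m = new M({ apiUrl: process.env.API_URL, apiKey: process.env.API_KEY });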
NOTE: Don't commit your .env to Git, since .env files usually contain some confidential information. The best practice is to create a .gitignore file and add .env to it.
RECOMMENDATION: You can keep all your .env files in one centralized place for better management. You can check out password management tools like https://www.dashlane.com/ or https://www.lastpass.com/.
You mentioned a complex distributed system. Distribution often goes hand in hand with a hub, i.e. a centralized key/variable management system. I'm assuming there are many env variables that are shared across your multiple apps.
Isn't it a plus if you create a centralized node that contains all your required variables and protects them with authentication?
Each node (regardless of app) would then maintain only a single password string in order to authenticate with the centralized node, and load all required variables from a single .env file.
Alternatively, you can use a system like Vault.
I'm new to NodeJS and trying to understand the internal workings of the system. Normally we create a .env file or some other configuration file to keep and manage secrets. The same environment values can also be set at the system level, e.g. using the "export" command on a Mac.
What I'm trying to understand is how NodeJS loads and reads these values when we start the program, whether they come from a configuration file or from the system itself.
You can dig through the NodeJS source code to actually see how the environment is provided to NodeJS, e.g. in the RealEnvStore class. As you can see, uv_os_getenv abstracts access to the env depending on the actual operating system.
At least on Unix systems, uv_os_getenv uses the environ variable, which basically references all the environment variables that are made available to the node process.
I am looking to bind a PCF (Pivotal Cloud Foundry) service to allow us to set certain API endpoints used by our UI within the PCF environment. I want to use the values in this service to overwrite the values in the root-directory file 'config.json'. Are there any examples out there that accomplish this sort of thing?
The primary way to tackle this is to have your application do this parsing. Most (all?) programming languages give you the ability to load environment variables and to parse JSON. Using these capabilities, what you'd want to do is read the VCAP_SERVICES environment variable and parse the JSON. This is where the platform will insert the information from your bound services. From there, you have the configuration information, so you can configure your app using the values from your bound service.
Manual example:
var vcap_services = JSON.parse(process.env.VCAP_SERVICES)
Or you can use a library. There's a handy Node.js library called cfenv. You can read more about both of these options in the docs.
https://docs.cloudfoundry.org/buildpacks/node/node-service-bindings.html
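With cfenv, it might look something like this (the service name is a placeholder):
const cfenv = require('cfenv');
const appEnv = cfenv.getAppEnv();
// look up the credentials of a bound service by name ('my-ui-config' is hypothetical)
const creds = appEnv.getServiceCreds('my-ui-config');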
If you cannot read the configuration inside of your application, perhaps because there's a timing problem and you need the information before your app starts, you can use the platform's pre-runtime hooks.
https://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html#profile
The runtime hooks allow your application to include a file called .profile which will execute before your application. The .profile file is a simple bash script which can do anything needed to ready your application to be run. The only catch is that this needs to happen very quickly because it must complete before your application is able to start up and your application has a finite amount of time to start (usually 60s).
In your case, you could use jq to parse your values and insert them into your config file, perhaps using sed to overwrite a template value. Another option would be to run a small Node.js script (since your app is using Node.js, it should be available on the path when this script runs) to read the environment variables and generate your config file.
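A rough sketch of that Node.js option might look something like this (the script name, the service name, and the credential keys are all assumptions about your setup; the .profile would just run node write-config.js):
// write-config.js (hypothetical): merge bound-service credentials into config.json
const fs = require('fs');
const vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');
// 'user-provided' and 'ui-endpoints' are assumed; adjust to your service label and instance name
const service = (vcap['user-provided'] || []).find(s => s.name === 'ui-endpoints');
const creds = service ? service.credentials : {};
const config = JSON.parse(fs.readFileSync('config.json', 'utf8'));
fs.writeFileSync('config.json', JSON.stringify({ ...config, ...creds }, null, 2));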
Hope that helps!