How to build with one of two .env files depending on a variable - node.js

I have two .env files (.env.development and .env.production) and two different build scripts, one for dev and one for prod. Now I want exactly one build script, and depending on a global environment variable I want to decide which of these two .env files should be used for the build.
Is there a way to write a check in the build script for what the environment variable is set to?

You can solve this problem by passing environment variables from your Unix server in production, and in development passing them from the .env file. This way you don't need two build scripts, because the app will get its variables either from .env or from the Unix environment.
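If you do still want a single build script that checks a variable and picks one of the two files, a minimal shell sketch could look like this (the APP_ENV variable and the npm build command are assumptions about your setup):
# build.sh - choose the .env file based on an environment variable
if [ "$APP_ENV" = "production" ]; then
  cp .env.production .env
else
  cp .env.development .env
fi
npm run build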
To pass an env variable to your Node.js app from the Unix OS, open a terminal and run:
> export MY_ENV="my environment value"
After that you can verify the env variable with:
> echo "$MY_ENV"
But I suggest you use Docker and set the env variables in your Docker environment. This way you separate your env variables from the OS environment and prevent inconsistencies.
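For example (the image name my-node-app is just a placeholder), you can bake the variable into the image in your Dockerfile:
ENV MY_ENV="my environment value"
or pass it when starting the container:
docker run -e MY_ENV="my environment value" my-node-app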

Related

vitepress env variables in markdown files

I'd like to retrieve the git build and branch data from an environment variable I can set/export at build time.
Is there a way I can reference such an env variable somewhere in the markdown files or config files? Thanks in advance!

Dockerizing a Node.js app - what does ENV PATH /app/node_modules/.bin:$PATH do?

I went through one of the very few good tutorials on dockerizing Vue.js, and there is one thing I don't understand: why is this mandatory in the Dockerfile?
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json #not sure though how it relates to PATH...
I found only one explanation here which says:
We expose all Node.js binaries to our PATH environment variable and copy our projects package.json to the app directory. Copying the JSON file rather than the whole working directory allows us to take advantage of Docker’s cache layers.
Still, it didn't make me any smarter. Anyone able to explain it in plain English?
Error prevention
I think this is just a simple method of preventing an error where Docker isn't able to find the correct executables (or any executables at all). Besides adding another layer to your image, there is, as far as I know, no general downside to adding that line to your Dockerfile.
How does it work?
Adding node_modules/.bin to the PATH environment variable ensures that the executables created during the npm install or yarn install process can be found. You could also COPY your locally built node_modules folder into the image, but it's advisable to build it inside the Docker container to ensure all binaries are compiled for the OS running in the container. The best practice would be to use multi-stage builds.
Furthermore, adding node_modules/.bin at the beginning of the PATH environment variable ensures that exactly these executables (from the node_modules folder) are used instead of any other executables which might also be installed on the system inside the Docker image.
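As a sketch, with that line in place later instructions can call the locally installed binaries by their bare names (webpack is just an example dependency here):
ENV PATH /app/node_modules/.bin:$PATH
WORKDIR /app
# dependencies installed as in the tutorial (COPY package.json, RUN npm install)
RUN webpack --mode production   # resolved via /app/node_modules/.bin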
Do I need it?
Short answer: Usually no. It should be optional.
Long answer: It should be enough to set the WORKDIR to the path where node_modules is located and to invoke the tools through npm or yarn scripts (which prepend node_modules/.bin to PATH themselves) for the RUN, CMD or ENTRYPOINT commands in your Dockerfile to find the correct binaries and execute successfully. But I, for example, had a case where Docker wasn't able to find the files (I had a pretty complex setup with a so-called devcontainer in VS Code). Adding the line ENV PATH /app/node_modules/.bin:$PATH solved my problem.
So, if you want to increase the stability of your Docker setup in order to make sure that everything works as expected, just add the line.
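For contrast, here is a minimal sketch of the usual case where the line is not needed, because the build is invoked through an npm script and npm run prepends node_modules/.bin to PATH by itself (the script name is an assumption):
WORKDIR /app
# package.json copied and dependencies installed as usual
RUN npm run build   # the "build" script can call webpack, vue-cli-service, etc. by name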
So I think the benefit of this line is to add the container's node_modules/.bin path to the list of directories in the PATH variable inside that container. If you're on a Mac (or Linux) and run:
$ echo $PATH
You should see a list of paths which are used to run global commands from your terminal, e.g. gulp, husky, yarn and so on.
The ENV PATH line adds the node_modules/.bin path to that list in your Docker container, so that such commands can be run globally inside the container when needed.
.bin (short for 'binaries') is a hidden directory; the period before bin indicates that it is hidden. This directory contains the executable files of your app's modules.
PATH is just a collection of directories/folders that contains executable files.
When you try to do something that requires a specific executable file, the shell looks for it in the collection of directories in PATH.
ENV PATH /app/node_modules/.bin:$PATH adds the .bin directory to this collection, so that when the shell needs a specific module's executable, it will also look for it in the .bin folder.
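You can see this on any machine with a node_modules folder (the listed binaries are just examples of what might be installed):
$ ls node_modules/.bin
eslint  jest  webpack
$ export PATH=./node_modules/.bin:$PATH
$ command -v webpack
./node_modules/.bin/webpack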
For each instruction, like FROM, COPY, RUN, CMD, ..., Docker creates an image with the result of that instruction, and these intermediate images are called layers. The final image is the result of merging all the layers.
If you use the COPY command to store all the code in one layer, that layer will be much larger than one that only stores an environment variable with the path to the code, and it has to be rebuilt whenever the code changes. That's why the cache layers are a benefit.
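A sketch of the ordering that takes advantage of this, assuming npm: the install layer only depends on package.json, so Docker reuses it from cache even when the rest of the code changes.
WORKDIR /usr/src/app
COPY package.json .
RUN npm install   # reused from cache until package.json changes
COPY . .          # only this layer is rebuilt when the source code changes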
For more info about layers, take a look at this very good article.

Docker: set a return value as an environment variable

I'm trying to create a temporary folder and then set the path as an environment variable for use in later Dockerfile instructions:
FROM alpine
RUN export TEMPLATES_DIR=$(mktemp -d)
ENV TEMPLATES_DIR=$TEMPLATES_DIR
RUN echo $TEMPLATES_DIR
Above is what I've tried, any idea how I can achieve this?
Anything you run in a Dockerfile will be persisted forever in the resulting Docker image. As a general statement, you don't need to use environment variables to specify filesystem paths, and there's not much point in creating "temporary" paths. Just pick a path; it doesn't even need to be a "normal" Linux path since the container filesystem is isolated.
RUN mkdir /templates
It's common enough for programs to use environment variables for configuration (this is a key part of the "12-factor" design), and so you can set the environment variable to the fixed path too:
ENV TEMPLATES_DIR=/templates
In the sequence you show, every RUN step creates a new container with a new shell, and so any environment variables you set in a RUN command get lost at the end of that step. You can't set a persistent environment variable in quite the way you're describing; Create dynamic environment variables at build time in Docker discusses this further.
If it's actually a temporary directory, and you're intending to clean it up, there are two more possibilities. One is to do all of the work you need inside a single RUN step that runs multiple commands. The environment variable won't outlive that RUN step, but it will be accessible within it.
RUN export TEMPLATES_DIR=$(mktemp -d) \
&& echo "$TEMPLATES_DIR" \
&& rm -rf "$TEMPLATES_DIR"
A second is to use a multi-stage build to do your "temporary" work in one image, but then copy the "permanent" parts of it out of that image into the final image you're actually shipping.
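A minimal sketch of that multi-stage pattern (the work done in the builder stage and the paths are assumptions, not taken from the question):
FROM alpine AS builder
RUN mkdir /work && echo "generated content" > /work/result.txt   # do the "temporary" work here
FROM alpine
COPY --from=builder /work/result.txt /templates/result.txt
# everything else from the builder stage is discarded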

How to edit and save Python file via command-line

I have a Dockerfile that, in one of its RUN instructions, creates a Conan file. I'd like to edit and save that Conan file in my Dockerfile to set project-specific settings. Is there a way to do so via the command line, for example the Python prompt?
Alternatively, is there a way to embed a Python file in a Dockerfile?
I don't know of any command to do so, but I would suggest another approach:
Create a Conan template file with environment variables as placeholders (yourconanfile.dist).
Use the envsubst command to create, at build or run time, the file you need from the current project variables (see the sketch below).
I use this technique in a Docker stack to generate multiple files (wp-cli.yml, deploy.php...). My example is in a Makefile. If you need to use it in your Dockerfile, it is possible assuming that:
envsubst is installed in your container
the COPY command is used to push the Conan template file into your container.
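A minimal sketch of how that could look in a Dockerfile (the template name, the variable and the package manager are assumptions; on an Alpine-based image envsubst is provided by the gettext package):
# conanfile.py.dist contains placeholders such as ${PROJECT_NAME}
COPY conanfile.py.dist /app/conanfile.py.dist
RUN apk add --no-cache gettext \
 && PROJECT_NAME=myproject envsubst < /app/conanfile.py.dist > /app/conanfile.py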

Svn post-commit hook failing on command not found

I'm trying to call a script using a post-commit hook, but it fails because it cannot find various commands.
My research has shown that there are basically no environment variables loaded when a post-commit hook is run, so I suppose that's why it can't find commands like ping.
Is there any way to define enough in my post-commit hook so that I can call a script that relies on baseline POSIX commands?
Subversion hook scripts are executed with an empty environment. Best practice is to specify full paths in your scripts (and any other scripts that they may call), or set up the environment variables you require in the hook script itself.
From the manual:
For security reasons, the Subversion repository executes hook programs with an empty environment—that is, no environment variables are set at all, not even $PATH (or %PATH%, under Windows). Because of this, many administrators are baffled when their hook program runs fine by hand, but doesn't work when run by Subversion. Be sure to explicitly set any necessary environment variables in your hook program and/or use absolute paths to programs.
At the start of your hook, you can set up PATH and other environment variables:
export LANG=en_US.UTF-8
export PATH=/path/to/bin:/another/bin:$PATH
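Putting it together, a minimal post-commit hook might look like this (the script path and the extra PATH entries are placeholders):
#!/bin/sh
export LANG=en_US.UTF-8
export PATH=/usr/local/bin:/usr/bin:/bin:$PATH
REPOS="$1"
REV="$2"
/path/to/your-script.sh "$REPOS" "$REV"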
