Next.js app reading configuration from Azure App Service

We have a Next.js project which is built with Docker and deployed to Azure App Service (container). We also set configuration values within App Service and try to access them, but it's not working as expected.
A few things we tried:
Restarting the App Service after adding new configuration
Removing the .env file while building the Docker image
Including the .env file while building the Docker image
Here's how we try to read the environment variables within the App Service:
const env = process.env.NEXT_PUBLIC_ENV;
const A = process.env.NEXT_PUBLIC_AS_VALUE;
Wondering if this can actually be done?
Just thinking out loud below:
Since we're deploying the Docker image within App Service's container (Linux), does that mean the container can't pull the value from this environment variable?
The Docker image already performs npm run build; does that mean the image is in a static form (build time)? It will never read from App Service configuration (runtime).
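A rough way to check that theory (assuming a default Next.js setup that writes its build output to .next/) is to build with a recognisable value and grep the client bundles for it:
NEXT_PUBLIC_ENV=baked-at-build-time npm run build
grep -r "baked-at-build-time" .next/static | head
# if the literal value shows up here, it was fixed at build time, and changing
# App Service configuration afterwards won't be picked up by the client bundles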

After a day or two, I came up with an alternative solution: passing the environment values in the Dockerfile while building my project.
TLDR
Pass your env values within the Dockerfile.
Set all your env (dev, staging, prod, etc.) variable values as Pipeline variables.
Also set a "settable" variable in the Pipeline variables, so you can choose which environment to build when triggering your pipeline (e.g., buildEnv).
Set up a bash script to perform the variable text replacement (e.g., from firebaseApiKey to DEVfirebaseApiKey) according to the env received from buildEnv.
Use the "Replace Tokens" task from Azure Pipelines to replace the values inside the Dockerfile.
Build your Docker image.
Voilà~ now you have your environment-specific build.
Details
Within your Dockerfile you can place your env variables like this:
RUN NEXT_PUBLIC_ENV=#{env}# \
NEXT_PUBLIC_FIREBASE_API_KEY=#{firebaseApiKey}# \
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=#{firebaseAuthDomain}# \
NEXT_PUBLIC_FIREBASE_PROJECT_ID=#{firebaseProjectId}# \
NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=#{firebaseStorageBucket}# \
NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID=#{firebaseMessagingSenderId}# \
NEXT_PUBLIC_FIREBASE_APP_ID=#{firebaseAppId}# \
NEXT_PUBLIC_FIREBASE_MEASUREMENT_ID=#{firebaseMeasurementId}# \
NEXT_PUBLIC_BASE_URL=#{baseURL}# \
npm run build
The values set here (e.g., baseURL, firebaseMeasurementId, etc.) are only placeholders, because we need to change them later with a bash script according to the buildEnv we receive. (buildEnv is settable when you trigger a build.)
A sample bash script is below. What it does is look within your Dockerfile for the word env and change it to DEVenv / UATenv / PRODenv based on what you pass to buildEnv:
#!/bin/bash
# $(buildEnv) is the Azure Pipelines variable set when triggering the build;
# it is substituted into the script before it runs
case "$(buildEnv)" in
  dev)
    sed -i -e 's/env/DEVenv/g' ./Dockerfile
    ;;
  uat)
    sed -i -e 's/env/UATenv/g' ./Dockerfile
    ;;
  prod)
    sed -i -e 's/env/PRODenv/g' ./Dockerfile
    ;;
  *)
    echo -n "unknown"
    ;;
esac
When this is complete, your "environment specific" Dockerfile is more or less created. Now we'll make use of the "Replace Tokens" task from Azure Pipelines to replace the values inside the Dockerfile. Make sure you have all your values set up as Pipeline variables!
Lastly, you can build your Docker image and deploy :)
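For completeness, the final build-and-push step is nothing special; a minimal sketch as it might appear in a pipeline script step, assuming an Azure Container Registry called myregistry (image name and tag are placeholders):
docker build -t myregistry.azurecr.io/my-next-app:$(Build.BuildId) .
docker push myregistry.azurecr.io/my-next-app:$(Build.BuildId)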

Related

npm adds whitespace when setting env variable in package.json

I have a pre-written package.json file for an app which I need to modify. More specifically, I want to change the NODE_PORT environment variable through the package.json file and I'm working on a Windows machine.
In the package.json I have several scripts that I run through npm when I like to spin up an instance of the app.
For example:
set NODE_PORT=80&& set NODE_ENV=test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/access.log -e ./logs/err.log --time --name Test
This script for example works fine.
However, when I'm trying to set the NODE_PORT variable to 8080 (that's the port I need) like so:
set NODE_PORT=8080&& set NODE_ENV=parallel_test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test
a whitespace at the end of the variable gets added.
I verified this by printing out the number of chars of $process.env.NODE_PORT in the log file which prints 5. Moreover the login for the app via Google crashes as the redirect link of the app doesn't match with the one in the Google Cloud Platform. That is:
app: http://localhost:8080 /auth/check-google vs. Google Cloud Platform: http://localhost:8080/auth/check-google
Any idea why this is happening?
I have faced a similar issue recently and handled it with .trimEnd() while adding variables with dotenv. But I think using cross-env can solve your problem.
Most Windows command prompts will choke when you set environment variables with NODE_ENV=production like that. (The exception is Bash on Windows, which uses native Bash.) Similarly, there's a difference in how Windows and POSIX commands use environment variables: with POSIX, you use $ENV_VAR, and on Windows you use %ENV_VAR%.
Add this inside your script: "cross-env NODE_PORT=8080 ..."
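As a sketch, the script from the question would then look something like this (the pm2 arguments are copied from the question; cross-env needs to be installed as a dev dependency first):
npm install --save-dev cross-env
cross-env NODE_PORT=8080 NODE_ENV=parallel_test pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test
The pm2 install pm2-logrotate step can stay as a separate && step, since it doesn't need the variables.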

Docker - unable to run script

What I'm doing
I am using AWS Batch to run a Docker container for a large compute job. I have configured the ECR/ECS successfully, to the best of my knowledge, but am having issues running the required commands for reasons that are beyond my level of understanding with Docker (newbie).
What I need to do is pass the below commands into my application and start my application to perform some heavy computing tasks; all commands listed below must be present.
The Issue(s)
The issue arises when I submit the job to AWS Batch; this service pulls the image from the ECR (Amazon Elastic Container Registry) and spins up a compute environment. The issue comes from when I try to run the command I pass in; below I will go through it.
"command": [
"mkdir -p logging",
"chmod 777 logging/",
"docker run -t -i -e my-application", # container name
"-e APIKEY",
"-e BASEURI",
"-e APIUSER",
"-v WORKSPACE /logging:/src/log",
"DOCKERIMAGE",
"python my_app.py",
"-t APP_USER",
"-e APP_ENVIRONMENT",
"-u APP_USERNAME",
"-p APP_PASSWORD",
"-i IN_PATH",
"-o OUT_PATH",
"-b tmp/"
]
The command above generates the following error(s)
container_linux.go:370: starting container process caused: exec: "mkdir -p log": executable file not found in $PATH
I tried to pass in a command to echo the env var $PATH but was unsuccessful getting a response; it resulted in a similar error.
I have successfully run "ls" and was able to see the directory contents of my application inside.
I am not however able to run any of these commands that I have included in the command [] section. I have tried just running python and such in hopes of getting a more detailed error but was unsuccessful.
Logic in plain English
Create a path called logging if it doesn't exist
Set the permissions for logging
Run the docker container and pass in the environment variables while doing so
Tell docker to run the python file my_app.py and pass in the expected runtime args
Execute and perform the required logic delegated in the python3 application
Questions
Why can I not create a directory here called "logging"? Where am I?
Am I running these properly as defined by AWS Batch, or by Docker?
What am I missing or where am I going wrong?
AWS Batch high level doc
AWS Batch link specific to what I'm doing
Assuming that you're following the syntax described in the Container Properties section of the AWS docs, you have several problems with the syntax of your command directive.
First
The command directive can only run a single command. You can't mash together a bunch of commands as you're trying to do in your example. If you need to run multiple commands you would need to embed them as an argument to a shell. For example, something like:
command: ["/bin/sh", "-c", "mkdir -p logging; chmod 777 logging; ..."]
Second
You must properly tokenize your command lines -- that is, when you type mkdir -p logging at the command prompt, the shell splits this into three parts (or "tokens"): ['mkdir', '-p', 'logging']. You need to do the same thing when building up the list of arguments to command.
This is invalid:
command: ["mkdir -p logging"]
That would look for a command named mkdir -p logging, and of course no such command exists. It would properly be written as:
command: ["mkdir", "-p", "logging"]
Third
I'm not very familiar with the AWS Batch environment, but it's unlikely you can run a docker command inside a Docker container as you're trying to do. It's unclear why you're doing this, though: why not just configure your AWS Batch job with the appropriate image, environment variables, etc.?
Take a look at some of these example job definitions.
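Putting those three points together, the job probably needs a single shell command along these lines (the uppercase names are the question's own placeholders, assumed here to be provided as environment variables in the job definition), which would then be passed as the last element of a ["/bin/sh", "-c", ...] command:
mkdir -p logging && chmod 777 logging/ && \
  python my_app.py -t "$APP_USER" -e "$APP_ENVIRONMENT" -u "$APP_USERNAME" \
    -p "$APP_PASSWORD" -i "$IN_PATH" -o "$OUT_PATH" -b tmp/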

How to build a Node JS application using Docker for different staging environments

This should have been routine, but I haven't been able to find any way. I am using Node with Docker for packaging. I have three environments: dev, qa, and prod, as usual. I have three configuration files with numerous variables: dev-config.json, qa-config.json, prod-config.json. I need Docker to pick up the right file and package it as config.json inside the Docker image. How do I go about this, please? Thanks.
For building an image with only the correct config file included, you can use --build-arg.
Add
ARG CONFIG_FILE
...
COPY $CONFIG_FILE config.json
in your Dockerfile and then use
docker build --build-arg CONFIG_FILE=prod-config.json .
to build your image
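If you need an image per environment, a small loop over the environments might look like this (myapp is a placeholder image name):
for env in dev qa prod; do
  docker build --build-arg CONFIG_FILE=${env}-config.json -t myapp:${env} .
done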
EDIT
The other possibility is to put all your config files in your image and decide which one to use when you start up the container. For instance, you could read the desired name of your config file from an environment variable (at runtime of the container, not to be confused with ARG and --build-arg at build time of the image), which can be set when you start your container.
I.e., somewhere in your node app:
// read the config file name from the environment variable
// and have a fallback if the environment variable is not defined
const configfilename = process.env.CONFIG_FILE || "config.json";
and when you start your container you can do
docker run --env CONFIG_FILE=prod-config.json YOURIMAGE
to set the environment variable. This way, you will have only one image.
A third possibility would be to not add your configs to the image at all, but load them from an external volume that you mount when you run the container. If you have different volumes for different configs, you can again decide at startup which volume to mount. As you can give your config file the same name on every volume, your app does not need to be aware of any environment variables; you just have to make sure you use the correct path to your config file and that all volumes have the same file structure.
I.e., in your node app:
const configfile = '/config/config.json';
and then you start your container mounting the correct config directory
docker run -v /host/path/to/prod-config:/config YOURIMAGE

rsync files from inside a docker container?

We are using Docker for the build/deploy of a NodeJS app. We have a test container that is built by Jenkins, and executes our unit tests. The Dockerfile looks like this:
FROM node:boron
# <snip> some misc unimportant config here
# Run the tests
ENTRYPOINT npm test
I would like to modify this step so that we run npm run test:cov, which runs the unit tests + generates a coverage report HTML file. I've modified the Dockerfile to say:
# Run the tests + generate coverage
ENTRYPOINT npm run test:cov
... which works. Yay!
...But now I'm unsure how to rsync the coverage report ( generated by the above command inside the Dockerfile ) to a remote server.
In Jenkins, the above config is invoked this way:
docker run -t test --rm
which, again, runs the above test and exits the container.
How can I add some additional steps after the entrypoint command executes, to (for example) rsync some results out to a remote server?
I am not a "node" expert, so bear with me on the details.
First of all, you may consider if you need a separate Dockerfile for running the tests. Ideally, you'd want your image to be built, then tested, without modifying the actual image.
Building a test-image that uses your NodeJS app as a base image (FROM my-nodejs-image) could do the trick, but may not be needed if all you have to do is run a different command / entrypoint on the image.
Secondly; stateful data (the coverage report falls into that category) should not be stored inside the container (i.e., not stored on the container's filesystem). You want your containers to be ephemeral, and anything that should live beyond the container's lifecycle (anything that should be preserved after the container itself is gone), should be stored outside of the container; either in a "volume", or in a bind-mounted directory.
Let me start with the "separate Dockerfile" point. Let's say, your NodeJS application Dockerfile looks like this;
FROM node:boron
COPY package.json /usr/src/app/
RUN npm install && npm cache clean
COPY . /usr/src/app
CMD [ "npm", "start" ]
You build your image, and tag it, for example, with the commit it was built from;
docker build -t myapp:$GIT_COMMIT .
Once the image was built successfully, you want to test it. Probably a quick test to verify it actually "runs". Many ways to do that, perhaps something like;
docker run \
-d \
--rm \
--network=test-network \
--name test-${GIT_COMMIT} \
myapp:$GIT_COMMIT
And a container to test it actually does something;
docker run --rm --network=test-network my-test-image curl test-${GIT_COMMIT}
Once tested (and the temporary container removed), you can run your coverage tests. However, instead of writing the coverage report inside the container, write it to a volume or bind-mount. You can override the command to run in the container with docker run;
mkdir -p /coverage-reports/${GIT_COMMIT}
docker run \
--rm \
--name test-${GIT_COMMIT} \
-v /coverage-reports/${GIT_COMMIT}:/usr/src/app/coverage \
myapp:$GIT_COMMIT npm run test:cov
The commands above;
Create a unique local directory to store the test-artifacts (coverage report)
Runs the image you built (and tagged myapp:$GIT_COMMIT)
Bind-mounts the /coverage-reports/${GIT_COMMIT} directory into the container at /usr/src/app/coverage
Runs the coverage tests (which will write to /usr/src/app/coverage if I'm not mistaken - again, not a Node expert)
Removes the container once it exits
After the container exits, the coverage report is stored in /coverage-reports/${GIT_COMMIT} on the host. You can use your regular tools to rsync those where you want.
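For example, a plain rsync of that directory to a remote host could look like this (the remote host and destination path are placeholders):
rsync -avz /coverage-reports/${GIT_COMMIT}/ ci@reports.example.com:/var/www/coverage/${GIT_COMMIT}/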
As an alternative, you can use a volume plugin to write the results to (e.g.) an s3 bucket, which saves you from having to rsync the results.
Once tests are successful, you can docker tag the image to bump your application's version (e.g. docker tag myapp:$GIT_COMMIT myapp:1.0.12345), docker push to your registry, and deploy the new version.
Make a script to execute as the entrypoint and put the commands in the script. You pass in args when calling docker run and they get passed to the script.
The docs have an example of the postgres image's script. You can build off that.
Docker Entrypoint Docs
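A minimal sketch of such a wrapper, assuming the image has rsync available, that the destination is passed in via an environment variable on docker run, and that the wrapper is set as the image's ENTRYPOINT:
#!/bin/sh
# hypothetical docker-entrypoint.sh for the test image
set -e
# run whatever command was passed to `docker run`, or the coverage run by default
if [ "$#" -gt 0 ]; then
  "$@"
else
  npm run test:cov
fi
# REPORT_DEST is assumed to be supplied via `docker run -e REPORT_DEST=user@host:/path`
rsync -avz ./coverage/ "$REPORT_DEST"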

Docker and securing passwords

I've been experimenting with Docker recently on building some services to play around with and one thing that keeps nagging me has been putting passwords in a Dockerfile. I'm a developer so storing passwords in source feels like a punch in the face. Should this even be a concern? Are there any good conventions on how to handle passwords in Dockerfiles?
Definitely it is a concern. Dockerfiles are commonly checked in to repositories and shared with other people. An alternative is to provide any credentials (usernames, passwords, tokens, anything sensitive) as environment variables at runtime. This is possible via the -e argument (for individual vars on the CLI) or the --env-file argument (for multiple variables in a file) to docker run. Read this for using environment variables with docker-compose.
Using --env-file is definitely a safer option since this protects against the secrets showing up in ps or in logs if one uses set -x.
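For example (variable names and image are placeholders):
# pass individual credentials on the command line...
docker run -e DB_USER=myUser -e DB_PASSWORD=myPwd myimage
# ...or keep them in a file that is never committed, and pass the whole file
docker run --env-file ./secrets.env myimage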
However, env vars are not particularly secure either. They are visible via docker inspect, and hence they are available to any user that can run docker commands. (Of course, any user that has access to docker on the host also has root anyway.)
My preferred pattern is to use a wrapper script as the ENTRYPOINT or CMD. The wrapper script can first import secrets from an outside location in to the container at run time, then execute the application, providing the secrets. The exact mechanics of this vary based on your run time environment. In AWS, you can use a combination of IAM roles, the Key Management Service, and S3 to store encrypted secrets in an S3 bucket. Something like HashiCorp Vault or credstash is another option.
AFAIK there is no optimal pattern for using sensitive data as part of the build process. In fact, I have an SO question on this topic. You can use docker-squash to remove layers from an image. But there's no native functionality in Docker for this purpose.
You may find shykes comments on config in containers useful.
Our team avoids putting credentials in repositories, so that means they're not allowed in Dockerfile. Our best practice within applications is to use creds from environment variables.
We solve for this using docker-compose.
Within docker-compose.yml, you can specify a file that contains the environment variables for the container:
env_file:
- .env
Make sure to add .env to .gitignore, then set the credentials within the .env file like:
SOME_USERNAME=myUser
SOME_PWD_VAR=myPwd
Store the .env file locally or in a secure location where the rest of the team can grab it.
See: https://docs.docker.com/compose/environment-variables/#/the-env-file
Docker now (version 1.13 or 17.06 and higher) has support for managing secret information. Here's an overview and more detailed documentation
A similar feature exists in Kubernetes and DC/OS.
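A rough sketch of the swarm-mode flow (names and values are placeholders):
# create the secret from stdin, so the value never appears in a Dockerfile or image layer
printf 'myPwd' | docker secret create db_password -
# grant it to a service; inside the container it shows up as /run/secrets/db_password
docker service create --name myapp --secret db_password myimage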
You should never add credentials to a container unless you're OK with broadcasting the creds to whomever can download the image. In particular, doing an ADD creds and later RUN rm creds is not secure, because the creds file remains in the final image in an intermediate filesystem layer. It's easy for anyone with access to the image to extract it.
The typical solution I've seen when you need creds to checkout dependencies and such is to use one container to build another. I.e., typically you have some build environment in your base container and you need to invoke that to build your app container. So the simple solution is to add your app source and then RUN the build commands. This is insecure if you need creds in that RUN. Instead what you do is put your source into a local directory, run (as in docker run) the container to perform the build step with the local source directory mounted as volume and the creds either injected or mounted as another volume. Once the build step is complete you build your final container by simply ADDing the local source directory which now contains the built artifacts.
I'm hoping Docker adds some features to simplify all this!
Update: it looks like the method going forward will be to have nested builds. In short, the Dockerfile would describe a first container that is used to build the run-time environment, and then a second nested container build that can assemble all the pieces into the final container. This way the build-time stuff isn't in the second container. Think of a Java app where you need the JDK for building the app but only the JRE for running it. There are a number of proposals being discussed; it's best to start from https://github.com/docker/docker/issues/7115 and follow some of the links for alternate proposals.
An alternative to using environment variables, which can get messy if you have a lot of them, is to use volumes to make a directory on the host accessible in the container.
If you put all your credentials as files in that folder, then the container can read the files and use them as it pleases.
For example:
$ echo "secret" > /root/configs/password.txt
$ docker run -v /root/configs:/cfg ...
In the Docker container:
# echo Password is `cat /cfg/password.txt`
Password is secret
Many programs can read their credentials from a separate file, so this way you can just point the program to one of the files.
run-time only solution
docker-compose also provides a non-swarm-mode solution (since v1.11: Secrets using bind mounts).
The secrets are mounted as files below /run/secrets/ by docker-compose. This solves the problem at run-time (running the container), but not at build-time (building the image), because /run/secrets/ is not mounted at build-time. Furthermore this behavior depends on running the container with docker-compose.
Example:
Dockerfile
FROM alpine
CMD cat /run/secrets/password
docker-compose.yml
version: '3.1'
services:
  app:
    build: .
    secrets:
      - password
secrets:
  password:
    file: password.txt
To build, execute:
docker-compose up -d
Further reading:
mikesir87's blog - Using Docker Secrets during Development
My approach seems to work, but is probably naive. Tell me why it is wrong.
ARGs set during docker build are exposed by the history subcommand, so no go there. However, when running a container, environment variables given in the run command are available to the container, but are not part of the image.
So, in the Dockerfile, do setup that does not involve secret data. Set a CMD of something like /root/finish.sh. In the run command, use environment variables to send secret data into the container. finish.sh uses the variables essentially to finish build tasks.
To make managing the secret data easier, put it into a file that is loaded by docker run with the --env-file switch. Of course, keep the file secret. .gitignore and such.
For me, finish.sh runs a Python program. It checks to make sure it hasn't run before, then finishes the setup (e.g., copies the database name into Django's settings.py).
There is a new docker command for "secrets" management. But that only works for swarm clusters.
docker service create \
  --name my-iis \
  --publish published=8000,target=8000 \
  --secret src=homepage,target="\inetpub\wwwroot\index.html" \
  microsoft/iis:nanoserver
The issue 13490 "Secrets: write-up best practices, do's and don'ts, roadmap" just got a new update in Sept. 2020, from Sebastiaan van Stijn:
Build time secrets are now possible when using buildkit as builder; see the blog post "Build secrets and SSH forwarding in Docker 18.09", Nov. 2018, from Tõnis Tiigi.
The documentation is updated: "Build images with BuildKit"
The RUN --mount option used for secrets will graduate to the default (stable) Dockerfile syntax soon.
That last part is new (Sept. 2020)
New Docker Build secret information
The new --secret flag for docker build allows the user to pass secret information to be used in the Dockerfile for building docker images in a safe way that will not end up stored in the final image.
id is the identifier to pass into the docker build --secret.
This identifier is associated with the RUN --mount identifier to use in the Dockerfile.
Docker does not use the filename of where the secret is kept outside of the Dockerfile, since this may be sensitive information.
dst renames the secret file to a specific file in the Dockerfile RUN command to use.
For example, with a secret piece of information stored in a text file:
$ echo 'WARMACHINEROX' > mysecret.txt
And with a Dockerfile that specifies use of a BuildKit frontend docker/dockerfile:1.0-experimental, the secret can be accessed.
For example:
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
# shows secret from default secret location:
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
# shows secret from custom secret location:
RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
This Dockerfile is only to demonstrate that the secret can be accessed. As you can see, the secret is printed in the build output. The final image built will not have the secret file:
$ docker build --no-cache --progress=plain --secret id=mysecret,src=mysecret.txt .
...
#8 [2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
#8 digest: sha256:5d8cbaeb66183993700828632bfbde246cae8feded11aad40e524f54ce7438d6
#8 name: "[2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret"
#8 started: 2018-08-31 21:03:30.703550864 +0000 UTC
#8 1.081 WARMACHINEROX
#8 completed: 2018-08-31 21:03:32.051053831 +0000 UTC
#8 duration: 1.347502967s
#9 [3/3] RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
#9 digest: sha256:6c7ebda4599ec6acb40358017e51ccb4c5471dc434573b9b7188143757459efa
#9 name: "[3/3] RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar"
#9 started: 2018-08-31 21:03:32.052880985 +0000 UTC
#9 1.216 WARMACHINEROX
#9 completed: 2018-08-31 21:03:33.523282118 +0000 UTC
#9 duration: 1.470401133s
...
The 12-Factor app methodology says that any configuration should be stored in environment variables.
Docker Compose can do variable substitution in its configuration, so that can be used to pass passwords from the host to Docker.
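A minimal sketch, assuming the docker-compose.yml references the variable under a service's environment section as DB_PASSWORD: ${DB_PASSWORD}:
# the value comes from the host shell (or from a .env file next to docker-compose.yml)
export DB_PASSWORD=supersecret
docker-compose up -d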
Starting from version 20.10, besides using secret-file, you can also provide secrets directly with env.
buildkit: secrets: allow providing secrets with env moby/moby#41234 docker/cli#2656 moby/buildkit#1534
Support --secret id=foo,env=MY_ENV as an alternative for storing a secret value to a file.
--secret id=GIT_AUTH_TOKEN will load env if it exists and the file does not.
secret-file:
THIS IS SECRET
Dockerfile:
# syntax = docker/dockerfile:1.3
FROM python:3.8-slim-buster
COPY build-script.sh .
RUN --mount=type=secret,id=mysecret ./build-script.sh
build-script.sh:
cat /run/secrets/mysecret
Execution:
$ export MYSECRET=theverysecretpassword
$ export DOCKER_BUILDKIT=1
$ docker build --progress=plain --secret id=mysecret,env=MYSECRET -t abc:1 . --no-cache
......
#9 [stage-0 3/3] RUN --mount=type=secret,id=mysecret ./build-script.sh
#9 sha256:e32137e3eeb0fe2e4b515862f4cd6df4b73019567ae0f49eb5896a10e3f7c94e
#9 0.931 theverysecretpassword#9 DONE 1.5s
......
With Docker v1.9 you can use the ARG instruction to fetch arguments passed by command line to the image on build. Simply use the --build-arg flag, so you can avoid keeping explicit passwords (or other sensitive information) in the Dockerfile and pass them in on the fly.
source: https://docs.docker.com/engine/reference/commandline/build/ http://docs.docker.com/engine/reference/builder/#arg
Example:
Dockerfile
FROM busybox
ARG user
RUN echo "user is $user"
build image command
docker build --build-arg user=capuccino -t test_arguments -f path/to/dockerfile .
during the build it prints
$ docker build --build-arg user=capuccino -t test_arguments -f ./test_args.Dockerfile .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM busybox
---> c51f86c28340
Step 2 : ARG user
---> Running in 43a4aa0e421d
---> f0359070fc8f
Removing intermediate container 43a4aa0e421d
Step 3 : RUN echo "user is $user"
---> Running in 4360fb10d46a
user is capuccino
---> 1408147c1cb9
Removing intermediate container 4360fb10d46a
Successfully built 1408147c1cb9
Hope it helps! Bye.
Something simple like this will work, I guess, if it is a bash shell:
read -sp "db_password:" db_password
docker build --build-arg mysql_db_password="$db_password" -t <image_name> .
Simply read it in silently and pass it as a build argument. You need to accept the variable as an ARG in the Dockerfile.
While I totally agree there is no simple solution, there continues to be a single point of failure: either the Dockerfile, etcd, and so on. Apcera has a plan that looks like a sidekick with dual authentication. In other words, two containers cannot talk unless there is an Apcera configuration rule. In their demo the uid/pwd was in the clear and could not be reused until the admin configured the linkage. For this to work, however, it probably means patching Docker or at least the network plugin (if there is such a thing).
