Node.js environment variable configuration for Windows and Linux - node.js

I am developing a Node.js application on Windows 10, but I will deploy it to a Linux server. I am trying to follow good practices for Node.js application development.
One of those practices is keeping settings like PORT, HOST, and a debug flag (debug_logic) out of the source code and providing them as environment variables at deployment time.
How can I achieve the following for my application?
Develop the application on Windows 10, deploy it to a Linux server, and provide the environment variables easily in both places.
Enable debugging while the application is being developed and disable it when deploying, controlled by an environment variable.
I added the following script under the package.json scripts key:
"start": "set \"PORT=80\" & set \"HOST=localhost\" & node server.js"
This kind of works for now, but I will have many more environment variables in the future, and moreover I have to do the same for Linux.
I also know that this can be achieved with a .env file; I tried that with the dotenv module and didn't like it either.

You can use the cross-env package (https://www.npmjs.com/package/cross-env) to set environment variables in a way that works on both Windows and Linux, e.g.:
"start": "cross-env PORT=80 HOST=localhost node server.js"

This is where containerization comes into play. You can use Docker, since it keeps the operating system environment separate from the application. The Docker documentation explains how to install Docker for Windows. Here is how to get started.
Add the following Dockerfile to your project root:
FROM node:alpine
# create a package.json and install the dependencies the app needs inside the image
RUN npm init -y
RUN npm install express
Also add a docker-compose.yml in the same directory:
version: "3"
services:
app:
build: ./
volumes:
- /path/to/local/app:/app
working_dir: /app
environment:
- DEBUG=1
- PORT=3000
ports:
- 3000:3000
command: node server.js
As you can see, the environment key can hold all the variables you want to make available throughout your app.
Once you have finished, run docker-compose up -d in the app root and check http://localhost:3000.
Docker can be installed and used on both Windows and Linux. You can check out the documentation for more details.
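If the list of variables keeps growing, Compose can also read them from a file via the env_file key instead of listing them inline under environment. A small sketch (the file name is an assumption):

# docker-compose.yml (excerpt)
services:
  app:
    env_file:
      - ./variables.env   # plain VAR=value lines, one per line

This keeps the variables out of both the source code and the compose file itself, and works the same on Windows and Linux hosts.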

Related

Dockerize and reuse NodeJS dependency

I'm developing an application based on a microfrontend architecture, and in a production environment, the goal is to have each microfrontend as a dockerized NodeJS application.
Right now, each microfrontend depends on an internal NPM package developed by the company, and I would like to know if it's possible to have that dependency as an independent image, where each microfrontend would somehow reuse it instead of installing it multiple times (once for each microfrontend).
I've been running some tests and have managed to dockerize the internal dependency, but I haven't been able to make it reachable from the microfrontends. I was hoping there was a way to set it up in package.json, similar to how it's done for a local path, but since each image's scope is isolated, they can't find that dependency.
Thanks in advance.
There are at least two solutions to your question:
Create a package, publish it to a local npm registry such as Verdaccio, and install it in every project (see the .npmrc sketch below).
Use a single Docker image with shared node_modules and change the command per service in docker-compose.
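For solution 1, a minimal sketch of the .npmrc each microfrontend could use, assuming Verdaccio is running on its default port (the URL is an assumption):

# .npmrc in each microfrontend project
registry=http://localhost:4873/

The internal package is then published to that registry with npm publish and installed like any other dependency.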
Solution 2
Basically, the idea is to put all your microservices into a single Docker image, in a structure like this:
/service1
/service2
/service3
/node_modules
/package.json
Then in your docker-compose.yaml:
version: '3'
services:
  service1:
    image: my-image:<version or latest>
    command: npm run service1:start
    environment:
      ...
  service2:
    image: my-image:<version or latest>
    command: npm run service2:start
    environment:
      ...
  service3:
    image: my-image:<version or latest>
    command: npm run service3:start
    environment:
      ...
The advantage is that you now have a single image to deploy in production, and all the shared code is in one place.
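For reference, the shared package.json in that image could expose one start script per service; a sketch (script names and entry points are assumptions):

{
  "scripts": {
    "service1:start": "node service1/index.js",
    "service2:start": "node service2/index.js",
    "service3:start": "node service3/index.js"
  }
}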

Problem deploying MERN app with Docker to GCP App Engine - should deployment take multiple hours?

I am inexperienced with DevOps, which drew me to using Google App Engine to deploy my MERN application. Currently, I have the following Dockerfile and entrypoint.sh:
# Dockerfile
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
# Entrypoint.sh
#!/bin/sh
node /app/index.js &
cd /app/client
npm start
The React front end is in a client folder, which is located in the base directory of the Node application. I am attempting to deploy these together, and would generally prefer to deploy them together rather than separately. Running docker-compose up --build successfully redeploys my application on localhost.
I have created a very simple app.yaml file which is needed for Google App Engine:
# app.yaml
runtime: custom
env: standard
I read in the docs here to use runtime: custom when using a Dockerfile to configure the runtime environment. I initially selected a standard environment over a flexible environment, and so I've added env: standard as the other line in the app.yaml.
After installing the SDK and running gcloud app deploy, things kicked off, but the deploy has now been sitting in my terminal window for the last several hours.
Hours seems like far longer than deploying an application should take, and I've begun to think that I've done something wrong.
You are probably uploading more files than you need.
Use a .gcloudignore file to describe the files and folders that you do not want to upload (see the documentation on .gcloudignore).
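A minimal .gcloudignore sketch for a project laid out like this one (the entries are assumptions; adjust to your layout):

# .gcloudignore
.git
.gitignore
node_modules/
client/node_modules/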
You may need to change the file structure of your current project.
Additionally, it might be worth looking into the Standard nodejs10 runtime. It uploads and starts much faster than the Flexible alternative (a custom runtime is part of App Engine Flex). You could then deploy each part to a different service.
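If you switch to the Standard environment, the app.yaml gets much simpler, since no Dockerfile is involved; a sketch assuming the Node.js 10 standard runtime:

# app.yaml
runtime: nodejs10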

How to set up a Node.js development environment using Docker Compose

I want to create a complete Node.js environment for developing any kind of application (script, API service, website, etc.), also using different services (e.g. MySQL, Redis, MongoDB). I want to use Docker for this in order to have a portable, multi-OS environment.
I've created a Dockerfile for the container in which Node.js is installed:
FROM node:8-slim
WORKDIR /app
COPY . /app
RUN yarn install
EXPOSE 80
CMD [ "yarn", "start" ]
And a docker-compose.yml file where I add the services that I need to use:
version: "3"
services:
app:
build: ./
volumes:
- "./app:/app"
- "/app/node_modules"
ports:
- "8080:80"
networks:
- webnet
mysql:
...
redis:
...
networks:
webnet:
I would like to ask what the best patterns are to achieve these goals:
Having the whole working directory shared between the host and the Docker container, in order to edit the files and see the changes from both sides.
Having the node_modules directory visible on both the host and the Docker container, so it is also debuggable from an IDE on the host.
Since I want a development environment suitable for every project, I would like a container that, once started, I can log into with a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive, instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
Thanks in advance!
Having the whole working directory shared between the host and the Docker container, in order to edit the files and see the changes from both sides.
Use the -v volume option (or the volumes key in docker-compose.yml) to share the host directory inside the Docker container.
Having the node_modules directory visible on both the host and the Docker container, so it is also debuggable from an IDE on the host.
Same as above.
Since I want a development environment suitable for every project, I would like a container that, once started, I can log into with a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive, instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
In your docker-compose.yml, define these on the service to enable interactive mode:
stdin_open: true
tty: true
Then you can attach to the running container with docker exec -it <container-name> bash (or docker-compose exec app bash).
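Applied to the app service from the question, a sketch of the relevant part of docker-compose.yml:

services:
  app:
    build: ./
    stdin_open: true   # keep STDIN open even when not attached
    tty: true          # allocate a pseudo-TTY
    volumes:
      - "./app:/app"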

Does a Node Docker container have to be started for Gulp use?

I don't know if my question is stupid, but after hours of racking my brain over it, I prefer to ask.
I'm trying to run NPM in a Docker container (Windows).
I don't want a real "node server"; I just use NPM to run utilities like gulp, webpack, browserify, vue.js...
So I added this to my ./docker-compose.yml file:
services:
  node:
    build: docker/node
    environment:
      - NODE_ENV=dev
Up to here, everything sounds good in my head.
Now here is the content of my ./docker/node/Dockerfile:
# See https://github.com/nodejs/docker-node#dockerfile
FROM node:6
EXPOSE 8080
USER node
# set the working directory
RUN mkdir /home/node/app
WORKDIR /home/node/app
# delete existing modules and re-install dependencies
COPY package.json /home/node/app/package.json
RUN rm -rf node_modules
RUN npm install
# launch the app
# EDIT : I removed this line to solve the issue. See answer.
CMD ["npm", "start"]
To create it, I just followed the official tutorials.
And here is my ./docker/node/package.json file:
{
  "name": "custom-symfony-project",
  "version": "1.0.0",
  "dependencies": {
    "gulp": "^4.0.0"
  },
  "devDependencies": {
    "gulp": "^4.0.0"
  }
}
I also have 3 containers (PHP, MySQL and NGINX), but they are independent and they all start correctly, so I don't think they are the source of the issue.
So I run docker-compose build: everything works fine.
But when I run docker-compose start, I get this in my Node container logs:
npm ERR! missing script: start
I tried adding an empty server.js, but the container doesn't start.
So my question is: do I really need to start something? Do I need a server.js? I don't know what to put into it.
When I was using npm on Ubuntu, I just never specified a start script!
Thanks!
Containers are designed to run as long as the process they support is running, and a container should run only one process. In your case, you removed the CMD line, which started the process the container supports, so the container has nothing to do and shuts down immediately.
You should think of your Docker container as a process, not a VM (virtual machine). A VM would have Node and other dependencies loaded and would be ready to run commands any time you log into it, but a container spins up to run one command and then shuts down.
It sounds like you want this container to spin up, run Gulp, then shut down. If that's the case, you can use a CMD line like this (assuming you install gulp globally within the Dockerfile):
CMD ["gulp"]
Or maybe you want it to spin up and watch for changes using gulp-watch? In that case, the CMD should be something like this:
CMD ["gulp", "watch"]
If you go with either option, note that Gulp will build the files within the container and not on your host filesystem unless you use a bind mount. A bind mount will allow your host filesystem to share a directory with the container and facilitate one or two-way updates to files.
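A sketch of what that bind mount could look like in the compose file from the question, assuming the Dockerfile's working directory of /home/node/app:

services:
  node:
    build: docker/node
    environment:
      - NODE_ENV=dev
    volumes:
      - ./:/home/node/app   # share the project directory so gulp output lands on the host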
OK, so I removed the CMD line from the Dockerfile, but the container just stopped anyway.
So I added the tty: true option to the docker-compose.yml file in order to keep the container active even if nothing is currently running in it, and for the moment it seems to work:
node:
  build: docker/node
  environment:
    - NODE_ENV=dev
  container_name: symfony4-windock-node
  tty: true

Google Sheets API v4, How to authenticate from Node.js (Docker Container)

I'm learning how to use the Google Sheets API v4 to download data from a sheet to my Node.js server. I'm using Docker containers for my Node app. It fails in Docker both on localhost and on the online server; it works fine when run outside a container on localhost. I've whitelisted the IP address in the Google API console. (Note: I'm easily able to use the Firebase API from this Node server, just not the Google Sheets v4 API.)
ref: https://developers.google.com/sheets/api/quickstart/nodejs#step_4_run_the_sample
First time you run the app, the command line on the node server displays:
Authorize this app by visiting this url:
https://accounts.google.com/o/oauth2/auth?access_type=offline&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets.readonly&response_type=code&client_id=xxx.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob
You go to that URL, and that Google page displays:
Sign in
Please copy this code, switch to your application and paste it there.
4/xxxxxxxxxxxx
And here's the rub. No way will that work. I can copy and paste the 4/xxx token into the command line, but it's a fail. No error message, no nothing. No function either. Is there a way to get there from here? I know this works fine in a standalone Node server on my desktop computer, but not in a Docker container (either localhost or online). Is there a manual method for the authentication?
Edit:
I started looking at the code again, and the issue is a failure of Node's readline while running in a Docker container.
var readline = require('readline');  // readline needs to be required, as in the quickstart sample

var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});
And that issue already exists here on Stack Overflow:
Unable to get standard input inside docker container
duplicate of:
how to get docker container to read from stdin?
You need to run the container in interactive mode with --interactive or -i.
Whoa... and how do you do that in a docker-compose deployment?
Interactive shell using Docker Compose
Ouch. No go on that posting; it didn't work at all for me. See the answer provided below.
Info provided here in case anybody else hits this bump in the road.
So it turns out the solution was nowhere near that provided by Interactive shell using Docker Compose.
I'm running a Node server in a Docker container. I wanted to use the terminal to insert a token on container startup, in response to a Google Sheets API call, using Node's readline method.
Instead, the solution I came up with came from a note I saw in a Docker Compose GitHub issue. A long, slow read of the Docker Compose documentation got me to a better solution. It was as simple as:
$ docker-compose build
$ docker-compose run -p 8080:80 node
One important detail: the word node is the name of my service, as declared in the docker-compose.yml file below. This solution worked fine both on my localhost and on an online server via an SSH terminal.
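For comparison, roughly the same thing with plain docker instead of Compose (the image tag is an assumption):

docker build -t my-node-app .
docker run -it -p 8080:80 my-node-app

The -i and -t flags are what give the container the interactive stdin and TTY that readline needs.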
Dockerfile:
FROM node:8
RUN mkdir -p /opt/app
# set our node environment, either development or production
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
# default to port 80 for node, and 5858 or 9229 for debug
ARG PORT=80
ENV PORT $PORT
EXPOSE $PORT 5858 9229
# install dependencies first, in a different location for easier app bind mounting for local development
WORKDIR /opt
COPY package.json package-lock.json* ./
RUN npm install && npm cache clean --force
ENV PATH /opt/node_modules/.bin:$PATH
# copy in our source code last, as it changes the most
WORKDIR /opt/app
COPY . /opt/app
CMD [ "node", "./bin/www" ]
docker-compose.yml
version: '3.1'
services:
  node:   # <-- name of the service in the container
    build:
      context: .
      args:
        - NODE_ENV=development
    command: ../node_modules/.bin/nodemon ./bin/www --inspect=0.0.0.0:9229
    ports:
      - "80:80"
      - "5858:5858"
      - "9229:9229"
    volumes:
      - .:/opt/app
      # this is a workaround to prevent host node_modules from accidentally getting mounted in the container
      # in case you want to use node/npm both outside the container for test/lint etc. and also inside the container.
      # This will overwrite the default node_modules dir in the container so it won't conflict with our
      # /opt/node_modules location. Thanks to the PR from #brnluiz
      - notused:/opt/app/node_modules
    environment:
      - NODE_ENV=development
    # tty: true ## tested, not needed
    # stdin_open: true ## tested, not needed
volumes:
  notused:
Many thanks to Bret Fisher for his work on node docker defaults.
