Can I make Docker an environment without saving code? - node.js

I'm new to Docker, and I'm wondering: can I use Docker as an application environment only?
I have a Dockerfile that builds a Docker image so that my teammates and the server can run my project.
FROM node:10.15.3
ADD . /app/
WORKDIR /app
RUN npm install
RUN npm run build
ENV HOST 0.0.0.0
ENV PORT 3000
EXPOSE 3000
CMD ["npm", "run","start"]
The project builds and runs; everything works.
However, I found that all the files get baked into the image: my source code and all of node_modules. That makes the image too big.
I remember that in a previous project I would create a Linux VM and bind my project folder into the guest OS. Then I could keep developing while using the VM as a server.
Can Docker do something like this? Docker would only need to load my project folder (whose path I would pass when running the command).
Then it runs npm install and npm start/dev, and all the libraries are saved into my local directory. Or I run npm start manually and Docker just loads my files and hosts them.
I just need Docker to be my application server, so that I can be sure I'll get the same result as deploying to the production server.
Can Docker do this?
Update:
I tried to use a bind mount to do this.
I created this docker-compose.yml:
version: "3.7"
services:
web:
build: .
volumes:
- type: bind
source: C:\myNodeProject
target: /src/
ports:
- '8888:3000'
and I updated the Dockerfile:
FROM node:10.15.3
# Install dependencies
WORKDIR /src/
# I ran 'CMD ls' and confirmed that the directory is mounted
# Expose the app port
EXPOSE 3000
# Start the app
CMD yarn dev
and I get this error:
web_1 | yarn run v1.13.0
web_1 | $ cross-env NODE_ENV=development nodemon server/index.js --watch server
web_1 | [nodemon] 1.18.11
web_1 | [nodemon] to restart at any time, enter `rs`
web_1 | [nodemon] watching: /src/server/**/*
web_1 | [nodemon] starting `node server/index.js`
web_1 | [nodemon] app crashed - waiting for file changes before starting...
index.js
const express = require('express')
const consola = require('consola')
const { Nuxt, Builder } = require('nuxt')
const app = express()

// Import and Set Nuxt.js options
const config = require('../nuxt.config.js')
config.dev = !(process.env.NODE_ENV === 'production')

async function start() {
  // Init Nuxt.js
  const nuxt = new Nuxt(config)
  const { host, port } = nuxt.options.server

  // Build only in dev mode
  if (config.dev) {
    const builder = new Builder(nuxt)
    await builder.build()
  } else {
    await nuxt.ready()
  }

  // Give nuxt middleware to express
  app.use(nuxt.render)

  // Listen the server
  app.listen(port, host)
  consola.ready({
    message: `Server listening on http://${host}:${port}`,
    badge: true
  })
}

start()

Docker can work the way you've suggested, using a bind mount from the host OS. That's useful in development: you can edit your code and the container immediately runs the new version.
However, in production you don't want to follow the same practice.
One of the main principles of Docker is that an image is immutable.
Once built, it's unchangeable; if you want to make changes, you need to build a new image.
As for your concern that Docker should load the same dependencies in production as locally: that's managed by package-lock.json, which ensures that whenever someone runs npm install, the same dependency versions are installed.
For production, your container should be lightweight, containing just your code and node_modules, and it's good practice to clear the npm cache after installation to keep the image as small as possible. A smaller image leaves less room for security holes and makes deployments faster.
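As a minimal sketch along those lines (assuming a package-lock.json is committed; the layer ordering, npm ci, and the cache clean are the only changes from the Dockerfile above), a production Dockerfile could look like:
FROM node:10.15.3
WORKDIR /app
# Copy the manifests first so the dependency layer is cached across code changes
COPY package.json package-lock.json ./
# npm ci installs exactly what package-lock.json pins; then drop the npm cache
RUN npm ci && npm cache clean --force
# Copy the source afterwards and build
COPY . .
RUN npm run build
ENV HOST 0.0.0.0
ENV PORT 3000
EXPOSE 3000
CMD ["npm", "run", "start"]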

Related

Environment variables are undefined during Cloud Run Build

I use Google Cloud Run to containerize a Node.js app. I added environment variables to Google Cloud Run by following this guide and expected to use them inside my application code. But whenever I run a build (Cloud Run build), it shows me that process.env.NODE_ENV and other environment variables are undefined.
Could you help me find the root cause of the issue?
Dockerfile
FROM node:14.16.0
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
# Copy local code to the container image.
COPY . ./
RUN yarn install
RUN yarn build
RUN npx knex --knexfile=./src/infrastructure/knex/knex.config.ts migrate:latest --env production
# Use the official lightweight Node.js 14 image.
# https://hub.docker.com/_/node
FROM node:14.16.0
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# Copying this first prevents re-running npm install on every code change.
COPY package.json yarn.lock ./
# Install production dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN yarn install --production --frozen-lockfile
COPY --from=0 /usr/src/app/dist ./dist
EXPOSE 8080
# Run the web service on container startup.
CMD [ "yarn", "prod" ]
This line throws the error:
RUN npx knex --knexfile=./src/infrastructure/knex/knex.config.ts migrate:latest --env production
This is knex.config.ts
import 'dotenv/config'
import { Knex } from 'knex'
import { envConfig, NodeEnvEnum } from '../../configs/env.config'

console.log('ASDASD', process.env.NODE_ENV, envConfig.environment, process.env.CLOUD_SQL_CONNECTION_NAME, envConfig.databaseCloudSqlConnection)

export const knexConfig: Record<NodeEnvEnum, Knex.Config> = {
  [NodeEnvEnum.Development]: {
    client: 'pg',
    connection: envConfig.databaseUrl,
    migrations: {
      extension: 'ts'
    }
  },
  [NodeEnvEnum.Production]: {
    client: 'pg',
    connection: {
      database: envConfig.databaseName,
      user: envConfig.databaseUser,
      password: envConfig.databasePassword,
      host: `/cloudsql/${envConfig.databaseCloudSqlConnection}`
    }
  }
}

export default knexConfig
This is env.config.ts
export enum NodeEnvEnum {
  Production = 'production',
  Development = 'development'
}

interface EnvConfig {
  serverPort: string
  environment: NodeEnvEnum
  // Database
  databaseCloudSqlConnection: string
  databaseUrl: string
  databaseUser: string
  databasePassword: string
  databaseName: string
}

export const envConfig: EnvConfig = {
  serverPort: process.env.SERVER_PORT as string,
  environment: process.env.NODE_ENV as NodeEnvEnum,
  // Database
  databaseUrl: process.env.DATABASE_URL as string,
  databaseCloudSqlConnection: process.env.CLOUD_SQL_CONNECTION_NAME as string,
  databaseName: process.env.DB_NAME as string,
  databaseUser: process.env.DB_USER as string,
  databasePassword: process.env.DB_PASSWORD as string
}
Example of the error from the Cloud Run logs
(logs are shown from bottom to top)
You are mixing contexts here.
There are three contexts you need to be aware of:
1. The observer that launches the Cloud Build process based on a Git push.
2. The Cloud Build job triggered by the observer. It executes in a sandboxed environment; it is the build process. A step/command fails here because you have not defined the ENV variables for this context. When the build finishes, it places the image in the GCR repository.
3. "The image" is then taken and used by Cloud Run as a service. Here you define the ENV variables for the service itself, for your application code, not for your build process.
For context 2, you need to use substitution variables (read more here and here).
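As a hedged sketch of what that can look like (the _NODE_ENV substitution name and the image path are assumptions, not your actual config), a cloudbuild.yaml can pass a substitution into the Docker build as a build argument, and the Dockerfile can turn it into an ENV for build-time commands such as the knex migration:
# cloudbuild.yaml (sketch)
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--build-arg', 'NODE_ENV=${_NODE_ENV}', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
substitutions:
  _NODE_ENV: 'production'

# Dockerfile (first stage)
ARG NODE_ENV
ENV NODE_ENV=$NODE_ENV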
I had the same problem, and the cause turned out to be that my .env files weren't getting copied into the Docker container upon deployment. I fixed it by adding .gcloudignore and .dockerignore files in the root of the repository.
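If it helps, a minimal .gcloudignore along those lines might be (the entries are an assumption; the point is that once the file exists, gcloud no longer falls back to .gitignore, which typically excludes .env):
.git
node_modules
# .env is deliberately not listed, so it is uploaded with the build source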

Error: connect ECONNREFUSED 0.0.0.0:8000 when hitting Docker containerised Node.js app endpoint

I'm just starting with Docker, and though I succeeded in creating an image and a container from it,
I'm not succeeding in connecting to the container's port with Postman; I get Error: connect ECONNREFUSED 0.0.0.0:8000.
In my server.js file I have:
const app = require('./api/src/app');
const port = process.env.PORT || 3000; // PORT is set to 5000
app.listen(port, () => {
  console.log('App executing to port ', port);
});
In my index.js I have:
const express = require('express');
const router = express.Router();
router.get('/api', (req, res) => {
  res.status(200).send({
    success: 'true',
    message: 'Welcome to fixit',
    version: '1.0.0',
  });
});
module.exports = router;
so if I run my app with either npm start or nodemon server.js, the localhost:3000/api endpoint works as expected.
I then build a docker image for my app with the command docker build . -t fixit-server with this Dockerfile:
FROM node:15.14.0
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 5000
# CMD ["npm", "start"]
CMD npm start
# CMD ["nodemon", "server.js"]
and run the container with the command docker run -d -p 8000:5000 --name fixit-container fixit-server tail -f /dev/null
and listing the containers with docker ps -a shows it running :
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
da0e4ef12402 fixit-server "docker-entrypoint.s…" 9 seconds ago Up 8 seconds 0.0.0.0:8000->5000/tcp fixit-container
but when I hit the endpoint 0.0.0.0:8000/api I get the ECONNREFUSED error.
I tried both CMD ["npm", "start"] and CMD npm start, but I get the error both ways.
Can you see what I'm doing wrong?
Update:
@Vincenzo was using docker-machine, so to check whether the app was working properly we needed to execute the following command in the terminal:
docker-machine env
The result was:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.102:2376"
export DOCKER_CERT_PATH="/Users/vinnytwice/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
Then based on the DOCKER_HOST value, we hit 192.168.99.102:8000/api and it was working.
I believe the problem is that you're never setting the PORT environment variable to 5000.
The EXPOSE instruction is effectively a no-op at runtime: it does nothing except document, for the developer, that the application inside listens on port 5000. You can read about it in the Docker documentation.
You need to either set an environment variable in the image or pass one at runtime to tell the container that PORT is 5000.
Method 1:
You can change your Dockerfile like below:
FROM node:15.14.0
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
ENV PORT=5000
EXPOSE $PORT
# CMD ["npm", "start"]
CMD npm start
# CMD ["nodemon", "server.js"]
Method 2:
Simply use the following command to run your container:
docker run -d -p 8000:5000 --name fixit-container --env PORT=5000 fixit-server
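Either way, a quick sanity check might look like this (note that the tail -f /dev/null in your original docker run command overrides the image's CMD, so npm start never runs; drop it when testing):
docker run -d -p 8000:5000 --env PORT=5000 --name fixit-container fixit-server
curl http://localhost:8000/api   # or the docker-machine IP, per the update above
# expected: {"success":"true","message":"Welcome to fixit","version":"1.0.0"}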

Docker container can't find path reference

I'm attempting to run a node.js server with a React frontend using a Docker container on my local Synology NAS. I was able to get the node.js server functioning using this guide.
I then attempted to add the React front end; however, I'm getting this error:
ReferenceError: path is not defined ... at /app/lib/app.js:7
app.use(express.static(path.join(__dirname, 'client/build')));
I'm able to run the server locally, so it seems this is an issue related to Docker, but I'm not quite sure where to look to resolve it.
For reference, the Dockerfile I'm using:
# test using the latest node container
FROM node:latest AS teststep
WORKDIR /app
COPY package.json .
COPY package-lock.json .
COPY lin ./lib
COPY test ./test
RUN npm ci --development
# test
RUN npm test
# build production packages with the latest node container
FROM node:latest AS buildstep
# Copy in package.json, install
# and build all node modules
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm ci --production
# This is our runtime container that will end up
# running on the device.
FROM node:alpine
WORKDIR /app
# Copy our node_modules into our deployable container context.
COPY --from=buildstep /app/node_modules node_modules
COPY lib ./lib
# Launch our App.
CMD ["node", "lib/app.js"]
App.js:
const express = require('express')
const app = express()
const path = require('path');
const port = 3000
app.use(express.static(path.join(__dirname, 'client/build')));
app.get('/', function(req, res) {
  res.sendFile(path.join(__dirname, 'client/build', 'index.html'));
});
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
The problem was fixed by resolving the "lin" typo, then deleting the existing container and executing the run.sh script.
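For reference, the corrected instruction in the test stage, matching the COPY in the runtime stage:
COPY lib ./lib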

Node watch on a docker-compose volume does not register delete events

I have a simple Node.js HTTP server running inside a Docker container. One of the basic structural folders is declared as a volume in docker-compose.yml so that the host and container directories mirror each other.
Within the Node server, I have a watcher set up to track changes within the volumed directory, using the npm package chokidar (although I have already tried multiple other watchers with the same result).
const watcher = require("chokidar");
watcher
  .watch("./app/experiments", { depth: 0, ignoreInitial: true })
  .on("all", (event, path) => {
    console.log(event);
    // ... DO SOME EXPRESS AND WEBPACK STUFF
  });
When I run the Node server locally, the watcher correctly picks up changes to the watched directory. In this case, chokidar is reporting these as addDir or unlinkDir, which correspond to a scaffolding script I run to add or remove new folders into the directory (which is served later via express.static()).
STDOUT:
> addDir
> EXPERIMENT ADDED!
> ...
> unlinkDir
> EXPERIMENT DELETED!
However, when I port the application into a Docker container, the behavior changes in a really strange way. I continue to get addDir events when I create new folders in the volumed directory, but I no longer receive unlinkDir (delete) events! Note that this only happens if I add or delete a file within the volumed directory on the host machine. If I add or delete a file within that directory inside the Docker container, my watcher correctly reports all of these events.
In either case, the volumed directory correctly mirrors itself, e.g. the files are actually deleted or added, and I can verify their existence on the host and by shelling into the Docker container and running ls.
Any docker geniuses out there with sage wisdom on why this is happening?
KEY STUFF:
OS X 10.13.6
Docker Toolbox:
Docker 18.03.0-ce
docker-machine 0.14.0
docker-compose 1.20.1
VirtualBox 5.2.18r124319
Dockerfile:
FROM node:8.12.0
WORKDIR /usr/dd-labs
COPY package*.json ./
RUN npm install
COPY app/ ./app
COPY server.js ./
COPY webpack/ ./webpack
EXPOSE 8080
docker-compose.yml:
version: "2"
services:
  app:
    image: #someImageName
    build: .
    ports:
      - "8080:8080"
    labels:
      io.rancher.container.pull_image: always
    environment:
      VIRTUAL_HOST: labs.docker
    volumes:
      - ./app:/usr/dd-labs/app
    command: [sh, -c, "npm run start:dev"]
You are probably running into a very well-known limitation of VM-backed Docker setups (Docker Toolbox/VirtualBox in your case, and likewise Docker for Windows): file system events are not propagated from the host into containers through the shared-folder mount.
A workaround is to use polling in the dev environment. With chokidar, you'd want the usePolling: true option.
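A minimal sketch of the watcher above with polling enabled (the interval value is an assumption; tune it for your project):
const watcher = require("chokidar");

watcher
  .watch("./app/experiments", {
    depth: 0,
    ignoreInitial: true,
    usePolling: true, // poll for changes instead of relying on inotify events
    interval: 500     // ms between polls (hypothetical value)
  })
  .on("all", (event, path) => {
    console.log(event, path);
  });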

How can I run Ghost in Docker with the google/node-runtime image?

I'm very new to Docker, Ghost and node really, so excuse any blatant ignorance here.
I'm trying to set up a Docker image/container for Ghost based on the google/nodejs-runtime image, but can't connect to the server when I run via Docker.
A few details: I'm on OS X, so I'm using boot2docker. I'm running Ghost as an npm module, configured to use port 8080 because that's what google/nodejs-runtime expects. This configuration runs fine outside of Docker when I use npm start. I also tried a simple "Hello, World" Express app on port 8080, which works from within Docker.
My directory structure looks like this:
my_app
  content/
  Dockerfile
  ghost_config.js
  package.json
  server.js
package.json
{
  "name": "my_app",
  "private": true,
  "dependencies": {
    "ghost": "0.5.2",
    "express": "3.x"
  }
}
Dockerfile
FROM google/nodejs-runtime
ghost_config.js
I changed all occurrences of port 2368 to 8080.
server.js
// This Ghost server works with npm start, but not with Docker
var ghost = require('ghost');
var path = require('path');
ghost({
  config: path.join(__dirname, 'ghost_config.js')
}).then(function (ghostServer) {
  ghostServer.start();
});
// This "Hello World" app works in Docker
// var express = require('express');
// var app = express();
// app.get('/', function(req, res) {
//   res.send('Hello World');
// });
// var server = app.listen(8080, function() {
//   console.log('Listening on port %d', server.address().port);
// });
I build my Docker image with docker build -t my_app ., then run it with docker run -p 8080 my_app, which prints this to the console:
> my_app# start /app
> node server.js
Migrations: Up to date at version 003
Ghost is running in development...
Listening on 127.0.0.1:8080
Url configured as: http://localhost:8080
Ctrl+C to shut down
docker ps outputs:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f4c7027f62f my_app:latest "/nodejs/bin/npm sta 23 hours ago Up About a minute 0.0.0.0:49165->8080/tcp pensive_lovelace
And boot2docker ip outputs:
The VM's Host only interface IP address is: 192.168.59.103
So I point my browser at 192.168.59.103:49165 and get nothing, and no output in the Docker logs. Like I said above, running the "Hello World" app in the same server.js works fine.
Everything looks correct to me. The only odd thing I see is that sqlite3 fails in npm install during docker build:
[sqlite3] Command failed:
module.js:356
Module._extensions[extension](this, filename);
^
Error: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found
...
node-pre-gyp ERR! Testing pre-built binary failed, attempting to source compile
but the source compile appears to succeed just fine.
I hope I'm just doing something silly here.
In your Ghost config, change the server host to 0.0.0.0 instead of 127.0.0.1. Inside the container, 127.0.0.1 is the container's own loopback interface, so a server bound to it can't be reached through a published port:
server: {
  host: '0.0.0.0',
  ...
}
PS: for the SQLite error, try this Dockerfile:
FROM phusion/baseimage:latest
# Set correct environment variables.
ENV HOME /root
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
# ...put your own build instructions here...
# Install Node.js and npm
ENV DEBIAN_FRONTEND noninteractive
RUN curl -sL https://deb.nodesource.com/setup | sudo bash -
RUN apt-get install -y nodejs
# Copy Project Files
RUN mkdir /root/webapp
WORKDIR /root/webapp
COPY app /root/webapp/app
COPY package.json /root/webapp/
RUN npm install
# Add runit service for Node.js app
RUN mkdir /etc/service/webapp
ADD deploy/runit/webapp.sh /etc/service/webapp/run
RUN chmod +x /etc/service/webapp/run
# Add syslog-ng Logentries config file
ADD deploy/syslog-ng/logentries.conf /etc/syslog-ng/conf.d/logentries.conf
# Expose Ghost port
EXPOSE 2368
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Note that I used phusion/baseimage instead of google/nodejs-runtime and installed Node.js and npm with:
ENV DEBIAN_FRONTEND noninteractive
RUN curl -sL https://deb.nodesource.com/setup | sudo bash -
RUN apt-get install -y nodejs
In your Dockerfile, you need the EXPOSE 8080 instruction.
But that only documents the port the container listens on. When you run a container from that image, you still need to map the port. For example:
$ docker run -d -t -p 80:8080 <imagename>
The -p 80:8080 maps port 8080 in the container to port 80 on the host while it is running.
The syntax always confuses me (I think of it as backwards).
