I use Google Cloud Run to containerize my Node.js app. I added environment variables to Cloud Run by following this guide and expected to use them inside my application code. But whenever I run a build (Cloud Run build), it shows me that process.env.NODE_ENV and other environment variables are undefined.
Could you help me find the root cause of the issue?
Dockerfile
FROM node:14.16.0
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
# Copy local code to the container image.
COPY . ./
RUN yarn install
RUN yarn build
RUN npx knex --knexfile=./src/infrastructure/knex/knex.config.ts migrate:latest --env production
# Use the official lightweight Node.js 14 image.
# https://hub.docker.com/_/node
FROM node:14.16.0
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# Copying this first prevents re-running npm install on every code change.
COPY package.json yarn.lock ./
# Install production dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN yarn install --production --frozen-lockfile
COPY --from=0 /usr/src/app/dist ./dist
EXPOSE 8080
# Run the web service on container startup.
CMD [ "yarn", "prod" ]
This line of code throws the error:
RUN npx knex --knexfile=./src/infrastructure/knex/knex.config.ts migrate:latest --env production
This is knex.config.ts
import 'dotenv/config'
import { Knex } from 'knex'
import { envConfig, NodeEnvEnum } from '../../configs/env.config'
console.log('ASDASD', process.env.NODE_ENV, envConfig.environment, process.env.CLOUD_SQL_CONNECTION_NAME, envConfig.databaseCloudSqlConnection)
export const knexConfig: Record<NodeEnvEnum, Knex.Config> = {
  [NodeEnvEnum.Development]: {
    client: 'pg',
    connection: envConfig.databaseUrl,
    migrations: {
      extension: 'ts'
    }
  },
  [NodeEnvEnum.Production]: {
    client: 'pg',
    connection: {
      database: envConfig.databaseName,
      user: envConfig.databaseUser,
      password: envConfig.databasePassword,
      host: `/cloudsql/${envConfig.databaseCloudSqlConnection}`
    }
  }
}
export default knexConfig
This is env.config.ts
export enum NodeEnvEnum {
  Production = 'production',
  Development = 'development'
}
interface EnvConfig {
  serverPort: string
  environment: NodeEnvEnum
  // Database
  databaseCloudSqlConnection: string
  databaseUrl: string
  databaseUser: string
  databasePassword: string
  databaseName: string
}
export const envConfig: EnvConfig = {
  serverPort: process.env.SERVER_PORT as string,
  environment: process.env.NODE_ENV as NodeEnvEnum,
  // Database
  databaseUrl: process.env.DATABASE_URL as string,
  databaseCloudSqlConnection: process.env.CLOUD_SQL_CONNECTION_NAME as string,
  databaseName: process.env.DB_NAME as string,
  databaseUser: process.env.DB_USER as string,
  databasePassword: process.env.DB_PASSWORD as string
}
Example of the error from the Cloud Run logs
(logs are shown from bottom to top)
You are mixing contexts here. There are three contexts you need to be aware of:
1. The observer that launches the Cloud Build process based on a Git push.
2. The Cloud Build job triggered by the observer. It is executed in a sandboxed environment and it is a build process; a step/command fails here because you have not defined the ENV variables for this context. When the build is finished, it places the image in the GCR repository.
3. Then the image is taken and used by Cloud Run as a service. Here you define the ENV variables for the service itself, i.e. for your application code, not for your build process.
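For Context 3, if you deploy with the gcloud CLI, the runtime environment variables can be set on the service at deploy time. A rough sketch (the service name, image path and values are placeholders, not your actual ones):
gcloud run deploy my-service \
  --image gcr.io/MY_PROJECT/my-image \
  --set-env-vars NODE_ENV=production,CLOUD_SQL_CONNECTION_NAME=MY_PROJECT:REGION:INSTANCE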
For Context 2, you need to end up using substitution variables (read more here and here).
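A sketch of how that could look from the CLI, assuming your cloudbuild.yaml forwards the substitutions into the build step (for example as docker --build-arg values); the substitution names and values below are placeholders:
gcloud builds submit \
  --config=cloudbuild.yaml \
  --substitutions=_NODE_ENV=production,_CLOUD_SQL_CONNECTION_NAME=MY_PROJECT:REGION:INSTANCE .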
I had the same problem and the cause turned out to be that my .env files weren't getting copied into the Docker container upon deployment. Fixed it by adding .gcloudignore and .dockerignore files in the root of the repository.
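In case it helps, a minimal sketch of those files (assumed contents, not the actual ones). The key point is that .env is not excluded there, so it still reaches the build; without a .gcloudignore, gcloud falls back to your .gitignore, which typically does exclude .env:
cat > .gcloudignore <<'EOF'
.git
node_modules
EOF
cp .gcloudignore .dockerignore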
Related
I'm just starting with Docker, and though I succeeded in creating an image and a container from it,
I'm not succeeding in connecting to the container's port with Postman and I get Error: connect ECONNREFUSED 0.0.0.0:8000.
In my server.js file I have:
const app = require('./api/src/app');
const port = process.env.PORT || 3000; // PORT is set to 5000
app.listen(port, () => {
  console.log('App executing to port ', port);
});
In my index.js I have:
const express = require('express');
const router = express.Router();
router.get('/api', (req, res) => {
  res.status(200).send({
    success: 'true',
    message: 'Welcome to fixit',
    version: '1.0.0',
  });
});
module.exports = router;
so if I run my app with either npm start or nodemon server.js the localhost:3000/api endpoint works as expected.
I then build a docker image for my app with the command docker build . -t fixit-server with this Dockerfile:
FROM node:15.14.0
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 5000
# CMD ["npm", "start"]
CMD npm start
# CMD ["nodemon", "server.js"]
and run the container with the command docker run -d -p 8000:5000 --name fixit-container fixit-server tail -f /dev/null
and listing the containers with docker ps -a shows it running:
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS                    NAMES
da0e4ef12402   fixit-server   "docker-entrypoint.s…"   9 seconds ago   Up 8 seconds   0.0.0.0:8000->5000/tcp   fixit-container
but when I hit the endpoint 0.0.0.0:8000/api I get the ECONNREFUSED error.
I tried both CMD ["npm", "start"] and CMD npm start but I get the error both ways.
Can you tell me what I'm doing wrong?
Update:
@Vincenzo was using docker-machine, and to be able to check whether the app was working properly, we needed to execute the following command in the terminal:
docker-machine env
The result was:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.102:2376"
export DOCKER_CERT_PATH="/Users/vinnytwice/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
Then based on the DOCKER_HOST value, we hit 192.168.99.102:8000/api and it was working.
I believe the problem is you're never setting the PORT environment variable to 5000.
The EXPOSE instruction is a no-op: it does nothing at runtime and only documents for the developer that the container is meant to listen on port 5000. You can read about it in the Docker documentation.
You need to either set an environment variable in the image or pass one to the container at runtime to specifically tell it that PORT is 5000.
Method 1:
You can change your Dockerfile like below:
FROM node:15.14.0
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
ENV PORT=5000
EXPOSE $PORT
# CMD ["npm", "start"]
CMD npm start
# CMD ["nodemon", "server.js"]
Method 2:
Simply use the following command to run your container:
docker run -d -p 8000:5000 --name fixit-container --env PORT=5000 fixit-server
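With either method, you can then do a couple of quick checks (illustrative commands; the container name and ports are the ones from the question):
docker exec fixit-container printenv PORT
docker logs fixit-container
curl http://localhost:8000/api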
I am literally new to AWS as well as to containerization technology. What I am trying to achieve is to deploy a Node image to AWS.
As I am working with NestJS, this is my main.ts bootstrap method:
import { NestFactory } from '@nestjs/core';
import { Logger } from '@nestjs/common';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.setGlobalPrefix('api/v1');
  const port = 5000;
  await app.listen(port);
  Logger.log(`Server is running on port ${port}`, 'Bootstrap');
}
bootstrap();
I am also using Travis CI to ship my container to AWS
My Dockerfile:
# Download base image
FROM node:alpine as builder
# Define Base Directory
WORKDIR /usr/app/Api
# Copy and restore packages
COPY ./package*.json ./
RUN npm install
# Copy all other directories
COPY ./ ./
# Setup base command
CMD ["npm", "run", "start"]
My .travis.yml file, which is the Travis CI config:
sudo: required
services:
  - docker
branches:
  only:
    - master
before_install:
  - docker build -t xx/api .
script:
  - docker run xx/api npm run test
deploy:
  provider: elasticbeanstalk
  region: "us-east-2"
  app: "api"
  env: "api-env"
  bucket_name: "name"
  bucket_path: "api"
  on:
    branch: master
  access_key_id: "$AWS_ACCESS_KEY"
  secret_access_key: "$AWS_SECRET_KEY"
Every time code is pushed from Travis CI, my Elastic Beanstalk environment starts building and then fails.
So I started googling to solve the issue. What I found is that I need to expose a port using NGINX, i.e. expose port 80:
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
My question is: how should I incorporate NGINX into my Dockerfile? My application is not serving static content, so if I move all my build artefacts to /usr/share/nginx/html it will simply not work. What I need is to run my Node server and, at the same time, run another container with NGINX that exposes port 80 and proxies my requests to the app.
How should I do that? Any help?
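One possible shape for that setup, sketched with plain docker commands (the network/container names, the xx/api image tag and the app port 5000 are assumptions taken from the question, not a verified config): run the Node app and an NGINX reverse proxy as two containers on one Docker network, with NGINX forwarding requests to the app by its container name.
docker network create api-net
docker run -d --name api --network api-net xx/api
cat > nginx.conf <<'EOF'
events {}
http {
  server {
    listen 80;
    location / {
      proxy_pass http://api:5000;
    }
  }
}
EOF
docker run -d --name proxy --network api-net -p 80:80 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx
On Elastic Beanstalk the same idea is usually expressed declaratively (for example in a docker-compose.yml or Dockerrun.aws.json) rather than with manual docker run commands, but the proxy-in-front-of-the-app layout is the same.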
I'm pretty new to Docker and Jenkins but wanted to see if I could get a Node app automatically deployed and running on my Raspberry Pi. In an ideal world, I'd like to have Jenkins pull down code from GitHub, then use a Jenkinsfile and Dockerfile to build and run the Docker image (hopefully this is possible).
jenkinsfile
pipeline {
  agent {
    dockerfile true
  }
  environment {
    CI = 'true'
    HOME = '.'
  }
  stages {
    stage('Install dependencies') {
      steps {
        sh 'npm install'
      }
    }
    stage('Test') {
      steps {
        sh './scripts/test'
      }
    }
    stage('Build Container') {
      steps {
        sh 'docker build -t test-app:${BUILD_NUMBER} . '
      }
    }
  }
}
dockerfile
# Create image based on the official Node image
FROM node:12
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package.json /usr/src/app
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app
# Expose the port the app runs in
EXPOSE 3000
# Serve the app
CMD ["npm", "start"]
However, when I try to run this in Jenkins, I get the following error: ../script.sh: docker: not found. This seems to be the case for any docker command. I actually tried running some other command starting with sudo, and it complained that sudo: not found. Is there a step missing, or am I trying to do something in an incorrect way? (NOTE: Docker is installed on the Raspberry Pi. I can log in with the jenkins user and execute docker commands. It just doesn't work through the web UI.) Any advice would be appreciated.
Thanks!
Apparently this section was breaking it:
agent {
  dockerfile true
}
When I set this:
agent any
it finished the build, including docker commands without any issues. I guess I just don't understand how that piece works. Any explanations would be helpful!
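A likely explanation (hedged, since I can't see your Jenkins setup): with agent { dockerfile true }, Jenkins builds an image from your Dockerfile and runs every sh step inside a container based on it, and that node:12 base image ships with neither the docker CLI nor sudo, which is why those commands come back as "not found". With agent any, the steps run directly on the Pi, where Docker is installed. You can see the missing CLI for yourself with something like:
docker run --rm node:12 sh -c 'command -v docker || echo "docker CLI is not installed in this image"'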
I'm getting used to Docker. Here is my current code in the Dockerfile:
FROM node:12-alpine AS builder
ARG NODE_ENV
ENV NODE_ENV ${NODE_ENV}
RUN npm run build
CMD ["sh","-c","./start-docker.sh ${NODE_ENV}"]
And I'm using pm2 to manage the cluster in Node.js; here is my start-docker.sh:
NODE_PATH=. pm2-runtime ./ecosystem.config.js --env $NODE_ENV
In my ecosystem.config.js, I define an env:
env_dev: {
  NODE_ENV: 'development'
}
Everything is okay, but on my server NODE_ENV is ''. I think there is something wrong with how I pass it in my CMD, but I cannot find out what.
Okay, in my mind there is another way to do this, so please try it. This will not be actual code, it is just the idea.
ecosystem.config.js
module.exports = {
  apps: [{
    name: "app",
    script: "./app.js",
    env: {
      NODE_ENV: "development",
    },
    env_production: {
      NODE_ENV: "production",
    }
  }]
}
And your dockerfile
dockerfile
FROM node:12-alpine
WORKDIR /app
COPY . .
RUN npm install && npm install -g pm2
RUN npm run build
CMD ["pm2-runtime", "start", "ecosystem.config.js"]
As described in the PM2 documentation, you just need to start the application from ecosystem.config.js (pm2 start ecosystem.config.js locally, or pm2-runtime inside a container); it automatically picks up the env variables declared in ecosystem.config.js.
https://pm2.keymetrics.io/docs/usage/application-declaration/#cli
Please try this. You might run into new problems, but hopefully ones with error logs, so that we can debug further. But I am fairly sure this could work and resolve your problem.
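One more hedged note, assuming you build and run the image yourself with the Docker CLI (my-app is a placeholder tag): the ARG/ENV pair in your original Dockerfile only receives a value if you pass one at build time, and you can also just set the variable at run time, or select the ecosystem env block explicitly:
docker build --build-arg NODE_ENV=production -t my-app .
docker run -e NODE_ENV=production my-app
# or, inside the container, pick the env block by name:
# pm2-runtime start ecosystem.config.js --env production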
I'm new to Docker and I wonder: can I use Docker as an application environment only?
I have a Dockerfile which lets me build a Docker image so that my team-mates and the server are able to run my project.
FROM node:10.15.3
ADD . /app/
WORKDIR /app
RUN npm install
RUN npm run build
ENV HOST 0.0.0.0
ENV PORT 3000
EXPOSE 3000
CMD ["npm", "run","start"]
The project can be built and run. Everything is perfect.
However, I found that all the files are packed into the image: my source code and all of node_modules. It makes the image too big.
And I remember that in a previous project, I would create a Linux VM and bind my project folder into the guest OS. Then I could keep developing and use the VM as a server.
Can Docker do something like this? Docker would only need to load my project folder (whose path would be passed when running the command).
Then it runs npm install and npm start/dev, and all the libraries are saved into my local directory. Or I run npm start manually and Docker loads my files and hosts the app.
I just need Docker to be my application server, to make sure I get the same result as when deployed to the production server.
Can Docker do this?
============================== Update ================================
I tried to use a bind mount to do this.
Then I created this docker-compose file:
version: "3.7"
services:
  web:
    build: .
    volumes:
      - type: bind
        source: C:\myNodeProject
        target: /src/
    ports:
      - '8888:3000'
and I updated the Dockerfile:
FROM node:10.15.3
# Install dependencies
WORKDIR /src/
# I ran 'CMD ls' to confirm that the directory is bind-mounted
# Expose the app port
EXPOSE 3000
# Start the app
CMD yarn dev
and I get the error
web_1 | yarn run v1.13.0
web_1 | $ cross-env NODE_ENV=development nodemon server/index.js --watch server
web_1 | [nodemon] 1.18.11
web_1 | [nodemon] to restart at any time, enter `rs`
web_1 | [nodemon] watching: /src/server/**/*
web_1 | [nodemon] starting `node server/index.js`
web_1 | [nodemon] app crashed - waiting for file changes before starting...
index.js
const express = require('express')
const consola = require('consola')
const { Nuxt, Builder } = require('nuxt')
const app = express()

// Import and Set Nuxt.js options
const config = require('../nuxt.config.js')
config.dev = !(process.env.NODE_ENV === 'production')

async function start() {
  // Init Nuxt.js
  const nuxt = new Nuxt(config)
  const { host, port } = nuxt.options.server

  // Build only in dev mode
  if (config.dev) {
    const builder = new Builder(nuxt)
    await builder.build()
  } else {
    await nuxt.ready()
  }

  // Give nuxt middleware to express
  app.use(nuxt.render)

  // Listen the server
  app.listen(port, host)
  consola.ready({
    message: `Server listening on http://${host}:${port}`,
    badge: true
  })
}

start()
Docker can also work the way you've suggested, using a bind mount from the host OS. It's useful in development, because you can edit your code and the Docker container immediately runs that code.
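For example, a rough sketch of that development-only use (the paths, ports and the yarn dev command are assumptions taken from your compose file and Dockerfile, so treat it as illustrative):
docker run --rm -it \
  -p 8888:3000 \
  -v "$PWD:/src" \
  -w /src \
  node:10.15.3 \
  sh -c "yarn install && yarn dev"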
However, in production, you don't want to follow the same practice.
One of the main principles of Docker containers is that an image is immutable:
once it's built, it's unchangeable, and if you want to make changes, you'll need to build a new image as a result.
As for your concern that Docker should load the same dependencies in production as locally, this is managed by package-lock.json, which makes sure that whenever someone runs npm install, the same dependencies are installed.
For production, your Docker container needs to be lightweight, so it should contain only your code and node_modules, and it's good practice to remove the npm cache after installation to keep your Docker image size as small as possible. A smaller image leaves less room for security holes and deploys faster.
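A hedged example of what that could look like as the install step inside the image build, assuming npm is used and a package-lock.json exists:
npm ci --only=production
npm cache clean --force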