I have a Node.js script bundled with webpack where I want to use environment variables from docker-compose, but every time the variable is undefined.
This is a small piece of my docker-compose file:
container:
  image: "node:8-alpine"
  user: "node"
  working_dir: /home/node/app
  environment:
    - NODE_ENV=development
  volumes:
    - ./project:/home/node/app
    - ./conf:/home/node/conf
  command: "yarn start"
I have this webpack configuration:
const path = require('path');
const TerserPlugin = require('terser-webpack-plugin');
const DefinePlugin = require('webpack').DefinePlugin;

module.exports = {
  entry: './src-js/widget.js',
  mode: process.env.NODE_ENV || 'development',
  output: {
    filename: 'widget.js',
    path: path.resolve(__dirname, 'public')
  },
  optimization: {
    minimizer: [new TerserPlugin()]
  },
  plugins: [
    new DefinePlugin({
      ENV: JSON.stringify(process.env.NODE_ENV)
    })
  ]
};
In my Node script I would like to use the NODE_ENV variable, so I have tried all of these solutions, but every time it is undefined:
console.log(process.env.NODE_ENV);
console.log(ENV);
console.log(process.env); //is empty
From inside the Docker container I have printed the environment variables, and NODE_ENV is there, but I can't use it in my Node file. Why?
Usually I use yarn build or yarn watch to recompile it.
Try this in your webpack configuration:
new DefinePlugin({
  'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV)
})
Additionally, the official docs have a snippet about it.
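With that key, any occurrence of process.env.NODE_ENV in the bundled source is replaced at build time with the literal value webpack saw when it ran inside the container. A minimal sketch of what the client code can then do, assuming the DefinePlugin entry above:

// src-js/widget.js
// DefinePlugin textually replaces process.env.NODE_ENV at bundle time,
// so this condition is evaluated against the baked-in value.
if (process.env.NODE_ENV === 'development') {
  console.log('running the development bundle');
}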
Related
I'm trying to get Docker working with Gulp / Browsersync and having a lot of trouble. My docker-compose file:
# compose file for local development build
version: "3"
services:
  web:
    build:
      context: ./webapp
      args:
        - NODE_ENV=development
    environment:
      - PORT=8080
    command: npm run start-dev
    restart: "no"
    volumes:
      - ./webapp:/app
      - /app/node_modules/
    ports:
      - "8080:8080"
      - "7000:7000"
The command runs a script in my package.json which is gulp & nodemon start.js. In theory, this should start gulp and nodemon simultaneously. I see them both starting up in the terminal, but changes to my watched files do not trigger an update.
The Dockerfile I'm referencing in the compose file is as follows; it essentially just copies my app over and installs gulp globally (NODE_ENV is set to development by docker-compose and the start command is overridden to npm run start-dev):
FROM node:8.16-alpine
# env setup
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /app
COPY . /app
# install npm packages
RUN npm install
RUN npm -g install nodemon@^1.19.1
RUN npm -g install gulp@^4.0.2
# compile scss (runs with browsersync in dev)
RUN if [ ${NODE_ENV} != "development" ]; then gulp; fi
# not used by heroku
EXPOSE 8080
CMD ["npm", "start"]
My gulpfile is below (this is where I have the least understanding, as it was written by someone else). My guess is that it has something to do with the proxy, but from what I've seen in other guides it should be correct.
const gulp = require('gulp')
const sass = require('gulp-sass')
const scsslint = require('gulp-scss-lint')
const size = require('gulp-size')
const csso = require('gulp-csso')
const autoprefixer = require('gulp-autoprefixer')
const browserSync = require('browser-sync')
const plumber = require('gulp-plumber')
const reload = browserSync.reload
const AUTOPREFIXER_BROWSERS = [
  'ie >= 10',
  'ie_mob >= 10',
  'ff >= 30',
  'chrome >= 34',
  'safari >= 7',
  'opera >= 23',
  'ios >= 7',
  'android >= 4.4',
  'bb >= 10'
]

const SOURCE = {
  scss: 'scss/**/*.scss',
  css: 'public/css',
  nunjucks: 'views/**/*.nunjucks',
  html: '*.html',
  js: ['/*.js', 'public/js/*.js']
}
// browser-sync task for starting the server.
gulp.task('browser-sync', function() {
  browserSync({
    proxy: "web:8080",
    files: ["public/**/*.*"],
    browser: "google chrome",
    port: 7000
  })
})

gulp.task('scss-lint', function(done) {
  gulp.src('/' + SOURCE.scss)
    .pipe(scsslint())
  done()
})

// Compile, lint, and automatically prefix stylesheets
gulp.task('sass', gulp.series('scss-lint', function(done) {
  let res = gulp.src(SOURCE.scss)
    .pipe(plumber())
    .pipe(sass({
      includePaths: ['node_modules/', 'public/lib/chartist-js/dist/scss/']
    }))
    .pipe(autoprefixer({
      browsers: AUTOPREFIXER_BROWSERS
    }))
    .pipe(csso(SOURCE.css))
    .pipe(gulp.dest(SOURCE.css))
    .pipe(size({
      title: 'CSS: '
    }))

  // livereload for development
  if (process.env.NODE_ENV === 'development') {
    res.pipe(reload({
      stream: true
    }))
  }
  done()
}))
gulp.task('bs-reload', function() {
  browserSync.reload()
})

// default task to be run with `gulp`
if (process.env.NODE_ENV === 'development') {
  // compile and start browsersync
  gulp.task('default', gulp.series('sass', 'browser-sync', function(done) {
    gulp.watch(SOURCE.scss, ['sass'])
    gulp.watch([SOURCE.js, SOURCE.nunjucks, SOURCE.html], ['bs-reload'])
    done()
  }))
}
else {
  // just compile
  gulp.task('default', gulp.series('sass'))
}
module.exports = gulp
I am re-re-re-reading the docs on environment variables and am a bit confused.
MWE repo: https://gitlab.com/SumNeuron/docker-nf
I made a plugin /plugins/axios.js which creates a custom axios instance:
import axios from 'axios'

const apiVersion = 'v0'

const api = axios.create({
  baseURL: `${process.env.PUBLIC_API_URL}/api/${apiVersion}/`
})

export default api
and accordingly added it to nuxt.config.js
import colors from 'vuetify/es5/util/colors'
import bodyParser from 'body-parser'
import session from 'express-session'

console.log(process.env.PUBLIC_API_URL)

export default {
  mode: 'spa',
  env: {
    PUBLIC_API_URL: process.env.PUBLIC_API_URL || 'http://localhost:6091'
  },
  // ...
  plugins: [
    // ...
    '@/plugins/axios.js'
  ]
}
I set PUBLIC_API_URL to http://localhost:9061 in the .env file. Oddly, the log statement is correct (port 9061), but when trying to reach the site there is an API call to port 6091 (the fallback).
System setup
project/
|-- backend (flask api)
|-- frontend (npx create-nuxt-app frontend)
|   |-- assets/
|   |-- ...
|   |-- plugins/
|   |   |-- axios.js
|   |-- restricted_pages/
|   |   |-- index.js (see other notes 3)
|   |-- ...
|   |-- nuxt.config.js
|   |-- Dockerfile
|-- .env
|-- docker-compose.yml
Docker
docker-compose.yml
version: '3'
services:
  nuxt: # frontend
    image: frontend
    container_name: my_nuxt
    build:
      context: .
      dockerfile: ./frontend/Dockerfile
    restart: always
    ports:
      - "3000:3000"
    command: "npm run start"
    environment:
      - HOST
      - PUBLIC_API_URL
  flask: # backend
    image: backend
    container_name: my_flask
    build:
      context: .
      dockerfile: ./backend/Dockerfile
    command: bash deploy.sh
    environment:
      - REDIS_URL
      - PYTHONPATH
    ports:
      - "9061:9061"
    expose:
      - '9061'
    depends_on:
      - redis
  worker:
    image: backend
    container_name: my_worker
    command: python3 manage.py runworker
    depends_on:
      - redis
    environment:
      - REDIS_URL
      - PYTHONPATH
  redis: # for workers
    container_name: my_redis
    image: redis:5.0.3-alpine
    expose:
      - '6379'
Dockerfile
FROM node:10.15
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
COPY ./frontend ${APP_ROOT}
RUN npm install
RUN npm run build
Other notes:
1. The reason the site fails to load is that the new axios plugin (@/plugins/axios.js) makes a weird XHR call when the page is loaded, triggered by commons.app.js line 464. I do not know why; this call is nowhere explicitly in my code.
2. I see this warning:
WARN Warning: connect.session() MemoryStore is not designed for a production environment, as it will leak memory, and will not scale past a single process.
I do not know what caused it or how to correct it.
I have a "restricted" page:
import express from 'express'

// Create express router
const router = express.Router()

// Transform req & res to have the same API as express
// So we can use res.status() & res.json()
const app = express()
router.use((req, res, next) => {
  Object.setPrototypeOf(req, app.request)
  Object.setPrototypeOf(res, app.response)
  req.res = res
  res.req = req
  next()
})

// Add POST - /api/login
router.post('/login', (req, res) => {
  if (req.body.username === username && req.body.password === password) {
    req.session.authUser = { username }
    return res.json({ username })
  }
  res.status(401).json({ message: 'Bad credentials' })
})

// Add POST - /api/logout
router.post('/logout', (req, res) => {
  delete req.session.authUser
  res.json({ ok: true })
})

// Export the server middleware
export default {
  path: '/restricted_pages',
  handler: router
}
which is configured in nuxt.config.js as
serverMiddleware: [
  // body-parser middleware
  bodyParser.json(),
  // session middleware
  session({
    secret: 'super-secret-key',
    resave: false,
    saveUninitialized: false,
    cookie: { maxAge: 60000 }
  }),
  // Api middleware
  // We add /restricted_pages/login & /restricted_pages/logout routes
  '@/restricted_pages'
],
which uses the default axios module:
// store/index.js
import axios from 'axios'
import api from '@/plugins/axios.js'

// ...

const actions = {
  async login(...) {
    // ....
    await axios.post('/restricted_pages/login', { username, password })
    // ....
  }
}

// ...
As you are working in SPA mode, you need your environment variables to be available during build time.
The $ docker run command is therefore already too late to define these variables, and that is what you are doing with your docker-compose's 'environment' key.
So, to make these variables available at build time, you can define them in your Dockerfile with ENV PUBLIC_API_URL http://localhost:9061. However, if you want them to be defined by your docker-compose file, you need to pass them as build args, i.e. in your docker-compose:
nuxt:
  build:
    # ...
    args:
      PUBLIC_API_URL: http://localhost:9061
and in your Dockerfile, you catch that arg and pass it to your build environment like so:
ARG PUBLIC_API_URL
ENV PUBLIC_API_URL ${PUBLIC_API_URL}
If you don't want to define the variable's value directly in your docker-compose file, but rather use environment variables defined locally (i.e. on the machine where you're launching the docker-compose command, for instance with export PUBLIC_API_URL=http://localhost:9061 in your shell), you can reference them as you would in a shell command, so your docker-compose ends up like this:
nuxt:
  build:
    # ...
    args:
      PUBLIC_API_URL: ${PUBLIC_API_URL}
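As an optional sanity check (a sketch, not part of the original setup): since the value is baked in at build time, you can make a missing variable fail the image build instead of silently falling back to localhost, for instance at the top of nuxt.config.js:

// nuxt.config.js (sketch): fail loudly during `npm run build` if the
// build arg was not passed through ARG/ENV as described above
if (!process.env.PUBLIC_API_URL) {
  throw new Error('PUBLIC_API_URL must be set at build time')
}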
The Nuxt RuntimeConfig properties can be used instead of the env configuration:
https://nuxtjs.org/docs/2.x/directory-structure/nuxt-config#publicruntimeconfig
publicRuntimeConfig is available as $config on both the client and server side and can expose runtime environment variables by configuring it in nuxt.config.js:
export default {
  ...
  publicRuntimeConfig: {
    myEnv: process.env.MYENV || 'my-default-value',
  },
  ...
}
Use it in your components like so:
// within <template>
{{ $config.myEnv }}
// within <script>
this.$config.myEnv
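It is also available outside components, which fits the axios plugin from the question above; a minimal sketch assuming Nuxt 2.13+ and a publicRuntimeConfig entry named publicApiUrl (both names are illustrative):

// plugins/axios.js (sketch)
import axios from 'axios'

export default ({ $config }, inject) => {
  const api = axios.create({
    baseURL: `${$config.publicApiUrl}/api/v0/`
  })
  // exposes the instance as this.$api / context.$api
  inject('api', api)
}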
See also this blog post for further information.
I am trying to set the environment variable NODE_ENV for my project.
I am using Windows and have set NODE_ENV in the system settings; this has been verified by typing SET and finding the row below in the output.
NODE_ENV=production
I cannot seem to get the variable to set in webpack though.
When adding the code below to my project (index.js), it only logs out undefined.
console.log('PROCESS', process.env.NODE_ENV)
My webpack config:
const path = require('path');
const webpack = require('webpack');
const ExtractTextPlugin = require('extract-text-webpack-plugin');
const UglifyJSPlugin = require('uglifyjs-webpack-plugin')

process.env.NODE_ENV = process.env.NODE_ENV || 'development'

if (process.env.NODE_ENV === 'test') {
  require('dotenv').config({ path: '.env.test' })
} else if (process.env.NODE_ENV === 'development') {
  require('dotenv').config({ path: '.env.development' })
} else if (process.env.NODE_ENV === 'production') {
  require('dotenv').config({ path: '.env.production' })
} else {
  require('dotenv').config({ path: '.env.development' })
}

module.exports = (env) => {
  const isProduction = env === 'production';
  ...
  plugins: [
    CSSExtract,
    new UglifyJSPlugin(),
    new webpack.DefinePlugin({
      'process.env.FIREBASE_API_KEY': JSON.stringify(process.env.FIREBASE_API_KEY),
      'process.env.FIREBASE_AUTH_DOMAIN': JSON.stringify(process.env.FIREBASE_AUTH_DOMAIN),
      'process.env.FIREBASE_DATABASE_URL': JSON.stringify(process.env.FIREBASE_DATABASE_URL),
      ...
    }),
  devtool: isProduction ? 'source-map' : 'inline-source-map',
  ...
I have read this question, but still cannot get the env variable to set.
Where am I going wrong?
I managed to set NODE_ENV by using the cross-env package!
If you are developing Node.js on Windows this can be very useful; Linux/Mac users would not have this problem.
To set the environment variable, simply type:
cross-env NODE_ENV=production [your command goes here]
Example:
cross-env NODE_ENV=production webpack --env production
I am using gulp in my React solution and want to set NODE_ENV. I tried this:
gulp.task('set-dev-node-env', function() {
  return process.env.NODE_ENV = 'development';
});

// dev environment run
gulp.task('default', ['set-dev-node-env', 'webpack-dev-server']);
When I run gulp and check the Chrome console it says:
testing process.env.NODE_ENV undefined
What am I missing or how can I set this NODE_ENV variable? I would like to set it in the gulpfile somehow.
Here is the complete project: github
Since you are using webpack, you can use the webpack DefinePlugin to inject NODE_ENV.
For example in your webpack.config.dev.js:
module.exports = {
  ...
  plugins: [
    ...
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify('development'),
    }),
  ],
};
Then in your React code you will have access to the NODE_ENV value you set as process.env.NODE_ENV.
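For example (a sketch of the idea, not code from the project):

// anywhere in the bundled React code: DefinePlugin replaces the
// expression below with the literal 'development' at build time
if (process.env.NODE_ENV === 'development') {
  console.log('running a development build');
}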
You can try this using the gulp-env module:
// gulpfile.js
var gulp = require('gulp');
var env = require('gulp-env');

gulp.task('set-dev-node-env', function () {
  env({
    vars: {
      NODE_ENV: "development"
    }
  })
});

gulp.task('default', ['set-dev-node-env', 'webpack-dev-server'])
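To check that it took effect for later tasks in the same gulp process, a quick sketch (the task name here is illustrative; the dependency follows the snippet above):

// prints "NODE_ENV is development" once set-dev-node-env has run
gulp.task('print-node-env', ['set-dev-node-env'], function () {
  console.log('NODE_ENV is', process.env.NODE_ENV);
});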
How to start node through gulp-nodemon with a flag?
gulp.task('default', function() {
  // listen for changes
  livereload.listen();
  // configure nodemon
  nodemon({
    // the script to run the app
    "script": 'server.js',
    "ignore": ["*.test.js", "logs/*"],
    "ext": 'js',
    env: { 'NODE_ENV': 'development', 'DEBUG': '*' },
  }).on('restart', function() {
    // when the app has restarted, run livereload.
    gulp.src('server.js')
      .pipe(livereload())
      .pipe(notify('Reloading page, please wait...'));
  })
})
I'd like to start it with the flag DEBUG=* for use with the debug library. However, it's not accepting it through env, and adding it after the script name results in an error.
How do I add a flag to nodemon within a gulp script?
Take a look at this issue. It is suggested there to set stdout: true or to use the exec option, i.e.:
nodemon({
  ...
  exec: 'node DEBUG=*'
});
If this does not work you can try the options args and nodeArgs shown in this issue, i.e.:
nodemon({
  ...
  args: ['DEBUG=*']
});
I did not try these approaches myself but I hope they will help.
Debug no longer uses a flag, as you can read in several blog posts. It now uses an environment variable instead:
nodemon({
  "script": 'server.js',
  "ignore": ["*.test.js", "logs/*"],
  "ext": 'js',
  "env": { 'NODE_ENV': 'development', 'DEBUG': '*' },
})
or by running it with the env variable set in the shell:
DEBUG=* node server.js
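On the application side, the debug library reads that DEBUG variable and only prints output for namespaces that match it; a minimal sketch:

// server.js (sketch)
const debug = require('debug')('app:server')

// printed only when DEBUG matches, e.g. DEBUG=app:* or DEBUG=*
debug('server starting on port %d', 8080)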