I have a ReactJS application and I'm deploying it using Kubernetes.
I'm trying to wrap my head around how to inject environment variables into my config.js file from within the Kubernetes deployment file.
I currently have these:
config.js file:
export const CLIENT_API_ENDPOINT = {
  default: process.env.URL_TO_SERVICE,
};
and here's my Kubernetes deployment variables:
"spec": {
"containers": [
{
"name": "container_name",
"image": "image_name",
"env": [
{
"name": "URL_TO_SERVICE",
"value": "https://www.myurl.com"
}
]
I'm kinda clueless as to why I can't see the environment variable in my config.js file. Any help would be highly appreciated.
Here's my dockerfile:
# Dockerfile (tag: v3)
FROM node:9.3.0
RUN npm install webpack -g
WORKDIR /tmp
COPY package.json /tmp/
RUN npm config set registry http://registry.npmjs.org/ && npm install
WORKDIR /usr/src/app
COPY . /usr/src/app/
RUN cp -a /tmp/node_modules /usr/src/app/
#RUN webpack
ENV NODE_ENV=production
ENV PORT=4000
#CMD [ "/usr/local/bin/node", "./index.js" ]
ENTRYPOINT npm start
EXPOSE 4000
The Kubernetes environment variables are available in your container, so you would think the task here is a version of getting server-side configuration variables shipped to your client-side code.
But if your React application is running in a container, you are most likely running your JavaScript build pipeline when you build the Docker image. Something like this:
RUN npm run build
# Run app using nodemon
CMD [ "npm", "start" ]
When Docker is building your image, the environment variables injected by Kubernetes aren't yet available. They won't exist until you run the built container on a cluster.
One solution, and this is maybe your shortest path, is to stop building your client-side code in the Dockerfile and combine the build and run steps in the npm start command. Something like this if you are using webpack:
"start": "webpack -p --progress --config webpack.production.config.js && node index.js"
If you go this route, then you can use any of the well-documented techniques for shipping server-side environment variables to your client during the build step: Passing environment-dependent variables in webpack. There are similar techniques and tools for all other JavaScript build tools.
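For example, with webpack's DefinePlugin you can bake the value into the bundle at build time. A minimal sketch, assuming the webpack.production.config.js referenced in the start script above:

// webpack.production.config.js (fragment)
const webpack = require('webpack');

module.exports = {
  // ...the rest of your config (entry, output, loaders, etc.)
  plugins: [
    new webpack.DefinePlugin({
      // Replaces every occurrence of process.env.URL_TO_SERVICE in the
      // client code with the literal value present at build time
      'process.env.URL_TO_SERVICE': JSON.stringify(process.env.URL_TO_SERVICE),
    }),
  ],
};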
A second approach: if you are running Node, you can continue building your client app in the container, but have the Node app write a config.js to the file system when the application starts.
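A minimal sketch of that approach, assuming the server serves a public/ directory as static assets and the client reads window.APP_CONFIG (both names are illustrative):

// On server startup, write the runtime env vars to a file the browser loads
const fs = require('fs');

const clientConfig = 'window.APP_CONFIG = ' + JSON.stringify({
  URL_TO_SERVICE: process.env.URL_TO_SERVICE,
}) + ';';

// Load it with <script src="/config.js"></script> before your bundle
fs.writeFileSync('./public/config.js', clientConfig);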
You could do even more complicated things, like exposing your config via an API (a variation on the second approach), but this seems like throwing good money after bad.
I wonder if there isn't an easier way. If you have a purely client-side app, why not just deploy it as a static site to, say, an Amazon S3 or Google Cloud Storage bucket, Firebase, or Netlify? That way you just run the build process and deploy to the correct environment. No container needed.
Related
I'm trying to build a Docker image of my Node backend for deployment, but when I run it in a container and open it in the browser I get a "This site can't be reached" error and the following log in dev tools:
crbug/1173575, non-JS module files deprecated
My backend is based on GraphQL Apollo server. Dockerfile is as following:
FROM node:16
WORKDIR /app
COPY ./package*.json ./
RUN npm ci --only=production
# RUN npm install
COPY . .
# RUN npm run build
EXPOSE 4000
CMD [ "node", "dist/main.js" ]
I've also tried to use the commented code, with no result.
The image builds without a problem and after running the container I get 🚀 Server ready at localhost:4000 in the docker logs, so I'd expect it to work properly.
"scripts": {
"build": "tsc",
"start": "node dist/main.js",
"dev": "concurrently \"tsc -w\" \"nodemon dist/main.js\""
},
That's the scripts part of my package.json. I've also tried CMD ["npm", "start"] in the Dockerfile, but that doesn't work either. When I run the backend from the terminal using npm start I can access the GraphQL playground at localhost:4000 - I assume that should be the same with Docker?
I'm still new to docker so I'd be grateful for any hints. Thanks
EDIT:
I run the container with the following command:
docker run --rm -d -p 4000:80 image-name:latest
Seemingly it's running on 0.0.0.0:4000, as that's what it says under 'PORTS' when I execute docker ps.
Run the docker inspect command to get the container's IP, then open that IP in the browser.
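For example, to print just the container's IP address:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>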
I have a NodeJS/TypeScript application (github repo) which works fine when I run the script defined in package.json, i.e., npm run start will start my local host and I can hit the endpoint via POSTMAN.
I have created a Docker image (I am new to Docker and this is my first image). Here, I am getting an Error: connect ECONNREFUSED 127.0.0.1:7001 error in POSTMAN.
I noticed that I do not see the Listening on port 7001 message in the terminal when I run the container. This tells me that I am making some mistake in the Dockerfile.
Steps:
I created the Docker image using docker build -t <IMAGE-NAME> . and I can see the image was created successfully.
I launched the container using docker run --name <CONTAINER-NAME> <IMAGE-NAME>
I've also disabled the Use the system proxy setting in POSTMAN, but no luck.
Details:
Package.json file
"scripts": {
"dev": "ts-node-dev --respawn --pretty --transpile-only src/server.ts",
"compile": "tsc -p .",
"start": "npm run compile && npm run dev"
}
Response from terminal when I run npm run start (This is successful)
Dockerfile
# FROM sets the base image on which we will run our application
FROM node:12.0.0
# Copy source code
COPY . /app
# Change working directory
WORKDIR /app
# Install dependencies
RUN npm install
RUN npm install -g typescript
# Expose API port to the outside
EXPOSE 7001
# Launch application
CMD ["npm", "start"]
Response after running docker command
GitHub repo structure
By any chance did you forget to map your container port to the host one?
docker run --name <CONTAINER-NAME> -p 7001:7001 <IMAGE-NAME>
the -p flag does the trick of exposing the port to your network. The number on the left side is the port on the host machine, and the number on the right side is the container port (7001, as exposed in the Dockerfile). You can map to other available host ports as well, e.g. -p 3000:7001 to expose the app on http://localhost:3000
Check out the Docker documentation about networking.
Finally, I was able to make this work with two things:
Using @Dante's suggestion (mentioned above).
Updating my Dockerfile with the following:
FROM node:12.0.0
# Change working directory
WORKDIR /user/app
# Copy package.json into the container at /user/app
COPY package*.json ./
# Install dependencies
RUN npm install
RUN npm install -g typescript
# Copy the current directory contents into the container (in this case, into the /user/app directory)
COPY . ./
# Expose API port to the outside
EXPOSE 7001
# Launch application
CMD ["npm", "run", "start"]
I am working on a NestJS project which is executed with docker-compose. Among the many containers that are run by docker-compose there is one container in which the application runs with nodemon (allowing me to debug it if necessary) and another container in which unit tests are executed when changes in the code are detected.
Is there a way to run the application and execute unit tests on code changes in the same container? Is that good practice? It would let my machine run faster, since the whole set of containers is quite heavy on resources; having just one container that runs the application and executes the unit tests on the fly would let me remove the container used only for the unit tests.
The nodemon config file is this:
{
"watch": ["src"],
"ext": "ts,json",
"ignore": ["src/**/*.spec.ts"],
"exec": "nest build && node --inspect=0.0.0.0 ./dist/main.js"
}
The unit tests in the second container are executed with jest --watch.
I am using one container for both running the app and executing tests. I see no problem with it. Since I'm using sqlite3 for e2e tests, my Dockerfile looks like this:
FROM node:12.18.1
RUN apt-get update \
  && apt-get install -y sqlite3
Also in docker-compose.yml my command for this node container is:
command: npm run start:debug-remote
because why not. This npm command is:
"start:debug-remote": "nest start --debug 0.0.0.0:9229 --watch"
In order for the debugger to work, you have to expose this port (9229) in docker-compose.yml (or in the Dockerfile) and set it in the .vscode/launch.json configuration.
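For reference, minimal sketches of those two pieces (the service name and the remoteRoot path are assumptions about your setup):

# docker-compose.yml (fragment)
services:
  api:
    ports:
      - "9229:9229" # debugger port, alongside your app port

// .vscode/launch.json (configuration entry)
{
  "type": "node",
  "request": "attach",
  "name": "Attach to container",
  "address": "localhost",
  "port": 9229,
  "localRoot": "${workspaceFolder}",
  "remoteRoot": "/app"
}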
I need advice on how to dockerize and run a Node.js static-content app on a K8s cluster.
I have static web content; I run "npm run build" in the terminal, which generates /build, and I point my IIS web server to /build/Index.html.
Now I have started creating a Dockerfile. How do I point my Node.js image to serve the /build/Index.html file?
FROM node:carbon
WORKDIR /app
COPY /Core/* ./app
RUN npm run build
EXPOSE 8080
CMD [ "node", ".app/build/index.html" ]
Also, how can I run this app only on node v8.9.3 and npm 5.6.0?
Any inputs, please?
You can specify the node version explicitly:
FROM node:8.9.3
Assumptions:
package.json is under the Code directory.
npm run build will be run outside of the container, and a build directory will be created in the Code directory.
We will copy the whole Code/build directory under the /app directory of the container.
We will copy package.json to the /app folder and run the website through the scripts available in the package.json file.
Solution:
I would say add a script named start in the package.json and call that script from the Dockerfile's CMD command. Note that node cannot execute an HTML file directly, so the start script should launch a static file server instead; for example, with the serve package added as a dependency:
"scripts": {
  "start": "serve -l 8080 ."
},
And the Dockerfile would look like:
FROM node:8.9.3
# Make app directory in the container.
RUN mkdir /app
# Copy whole code to app directory.
COPY Code/build/ /app
# Copy package.json app directory.
COPY package.json /app
# make app directory as the working directory.
WORKDIR /app
# Install dependencies.
RUN npm install --only=production
# Expose the port
EXPOSE 8080
# Start the process
CMD ["npm", "start"]
I have a ReactJS app deployed on an AWS Elastic Beanstalk environment that uses Cognito for authentication. I need to make my front-end code configurable per environment (e.g. DEV) using environment variables for the database and Cognito settings.
Does anyone know how to achieve that?
I don't think you can read any ENV vars from your client-side ReactJS app; instead, you'll need a server-side technology to do that. Elastic Beanstalk lets you enter the environment variables for each environment using the management panel. Add your ENV vars and these variables will be attached to the process.env object:
const config = {};
config.db = {
  host: process.env.DB_HOST || 'your-db-host',
  user: process.env.DB_USER || 'your-db-user',
  password: process.env.DB_PASSWORD || 'your-db-pwd',
};
export default config;
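If the browser genuinely needs some of these values, the usual pattern is to expose them through a small server-side endpoint. A minimal sketch with Express, where the /config route and the variable name are assumptions, and only non-secret values should ever be returned:

import express from 'express';

const app = express();

// Expose only values that are safe to make public, e.g. a Cognito pool id;
// never send database credentials to the client
app.get('/config', (req, res) => {
  res.json({
    cognitoPoolId: process.env.COGNITO_POOL_ID,
  });
});

app.listen(process.env.PORT || 3000);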
I am using create-react-app and here are my build scripts:
"build": "sh -ac '. ./.env.${REACT_APP_ENV}; react-scripts build'",
"build:prod": "REACT_APP_ENV=prod npm run-script build",
"build:staging": "REACT_APP_ENV=staging npm run-script build",
It requires you to have files like .env.prod and .env.staging in your root folder to set the environment variables for their respective environments. You can add other scripts as well; for example, for a local environment I would add
"build:local": "REACT_APP_ENV=local npm run-script build"
in package.json, and then add a .env.local file in the root folder that has all my local-specific env variables.
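For example, a hypothetical .env.local (note that create-react-app only exposes variables prefixed with REACT_APP_ to the client):

REACT_APP_API_URL=http://localhost:4000
REACT_APP_COGNITO_POOL_ID=local-pool-id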
Run the build command for CRA as
npm run build:local
(to build with the local env variables).