So I wrote a Twitch chat bot: Dockerized (docker compose), Node.js v16 with Express.
For the authorize page someone can use to authorize my bot against the Twitch API, I used the route /auth/request like this:
this.serverUrl = serverUrl;
this.port = port;
this.app = express();
this.app.use(express.static(__dirname + '/frontend/'));

// Landing page to authorize the app for a channel
this.app.get('/auth/request/', (req: any, res: any) => {
    console.log('/');
    var indexhtml = new Replacer().replace(__dirname + '/frontend/auth/request/index.html', '%SERVER_URL%', this.serverUrl);
    res.send(indexhtml);
});
(I am using '%SERVER_URL%' as a placeholder that gets replaced by my localhost or domain address.)
The first time, there was an error replacing the string, and the Twitch API responded with an error, of course.
But after that, I was not able to change the behavior of the route anymore. Furthermore, it was still available after commenting it out completely.
Several restarts did not help. Even with
docker-compose up --build --force-recreate
I put the route back in, fixed the error, and changed the route to "/" (I wanted to do that anyway). There it works fine, but the old route is still reachable, still with the replace error.
I thought of some kind of weird daemon service still running, but that can't be it, since the route is not reachable when the container is not running.
I have no further ideas...
How can I get rid of this annoying route? It should not exist anymore.
docker-compose.yml
version: '0.1'
services:
  node:
    container_name: sacrificulus
    build: ./app
    ports:
      - "3000:3000"
    volumes:
      - D:\Projects\WebProjects\AlfredServes\app:/app/token_store
    command: ["./node_modules/.bin/ts-node", "./src/bot.ts"]
Dockerfile
FROM node:16
WORKDIR /app
COPY . /app
ENV TWITCH_CLIENT_ID=12345mytwitchclientid54321
ENV URL_LIVE=https://bot.example.com
ENV PORT_LIVE=80
ENV URL_LOCAL=http://localhost:
ENV PORT_LOCAL=3000
ENV LIVE_OR_LOCAL=local
#ENV LIVE_OR_LOCAL=live
RUN npm install
Has anyone seen similar behavior?
(Sorry for my code quality :D)
Sorry for answering instead of commenting. If you think the code is not being updated, try taking the container down, deleting it, and rebuilding. Using a relative path instead of an absolute path for volumes might also help.
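For example, with standard Docker Compose commands (--no-cache forces the image to rebuild from scratch instead of reusing cached layers):

docker-compose down                 # stop and remove the project's containers and network
docker-compose build --no-cache     # rebuild the image without the layer cache
docker-compose up --force-recreate  # recreate and start the containers from the fresh image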
In the Express code, why don't you use res.redirect(process.env.URL_REDIRECT),
with URL_REDIRECT pointing to either your URL_LIVE or URL_LOCAL?
Or, even better, put the logic inside your Express code:
const URL_REDIRECT = (process.env.LIVE_OR_LOCAL === 'live') ? process.env.URL_LIVE : process.env.URL_LOCAL;
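Wired into the route from the question, that could look like this (a sketch only; the env names come from the Dockerfile in the question, and URL_LOCAL there already ends with a colon, so the local port is appended directly):

const express = require('express');
const app = express();

// Decide the redirect target once at startup, based on LIVE_OR_LOCAL
const URL_REDIRECT = process.env.LIVE_OR_LOCAL === 'live'
    ? process.env.URL_LIVE
    : process.env.URL_LOCAL + process.env.PORT_LOCAL;

app.get('/auth/request/', (req, res) => {
    res.redirect(URL_REDIRECT); // send the browser to the configured server URL
});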
Also, why do you set the env variables in the Dockerfile and not in docker-compose? They are available to Node.js either way (read: https://nodejs.dev/en/learn/how-to-read-environment-variables-from-nodejs/).
Also, just noticed: you are using docker-compose file version 0.1 instead of the recommended version 3.
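For example, the Dockerfile ENV lines could move into docker-compose.yml like this (a sketch reusing the values from the question):

services:
  node:
    container_name: sacrificulus
    build: ./app
    environment:
      - TWITCH_CLIENT_ID=12345mytwitchclientid54321
      - LIVE_OR_LOCAL=local
      - URL_LOCAL=http://localhost:
      - PORT_LOCAL=3000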
Related
I have a React/Node app which I am trying to host on AWS Amplify. On the first try, my app deployed, but I saw that some pages were not loading due to missing environment variables.
I had added them to the AWS console before deploying, and it did not work. Then I did some searching and saw that I needed to modify the "amplify.yml" file to:
build:
  commands:
    - npm run build:$BUILD_ENV
But not only did it not work, the app is not working anymore at all.
Any ideas?
As this question specifically references React, here are the steps you need to use environment variables in your React-based application in AWS Amplify.
In your client-side JS:
const BUILD_ENV = process.env.REACT_APP_BUILD_ENV || "any-default-local-build_env";
The key thing to note here is my prefix of REACT_APP_, which is covered in the Create React App docs: here
Now, in your Amplify console, under App Settings > Environment variables, add your variable (e.g. BUILD_ENV) and its value.
Finally, you also need to add a reference to it under App Settings > Build Settings:
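A sketch of what that Build Settings reference can look like, reusing the npm run build:$BUILD_ENV command from the question:

build:
  commands:
    - npm run build:$BUILD_ENV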
Note: "BUILD_ENV" can be any string you wish. Within the environment variables you can provide the necessary DEV / PROD overrides.
DO NOT store SECRET KEYS using this method; AWS provides a secrets manager for this. This method is for publishable keys, like connecting to Firebase or Stripe, where the key is fine to be public.
The Amplify documentation on Environmental Variables has a section on "Accessing Environmental Variables".
Per that documentation, in your Amplify.yml (either in the console or by downloading it to the root of your project), you can use a command to push Amplify Environmental Variables into your .env. If you created an Environmental Variable in Amplify called "REACT_APP_TEST_VARIABLE" you would do...
build:
  commands:
    - echo "REACT_APP_TEST_VARIABLE=$REACT_APP_TEST_VARIABLE" >> .env
    - npm run build
Once in your .env you can access them through process.env...
console.log('REACT_APP_TEST_VARIABLE', process.env.REACT_APP_TEST_VARIABLE)
If you are using Create React App, you already have dotenv, or see Adding an .env file to React Project for more info.
Obligatory reminder to add your .env to your .gitignore, and don't store anything in .env that is sensitive.
To get @leandro's answer working, I had to wrap the AWS environment variable names in curly braces:
build:
  commands:
    - npm run build
    - VARIABLE_NAME_1=${VARIABLE_NAME_1}
Probably better as a comment but I don't have enough points to post one.
@A Zarqam Hey man, I ran into this issue and spent a decent amount of time on it. What worked for me was:
In my React code, use process.env.VARIABLE_NAME
In my webpack.config.js, use the following plug-in:
new webpack.EnvironmentPlugin(['VARIABLE_NAME_1', 'VARIABLE_NAME_2'])
In the Amplify environment variables, put VARIABLE_NAME_1, etc., and then the values, just like the docs say.
Last, on the build settings:
build:
  commands:
    - npm run build
    - VARIABLE_NAME_1=$VARIABLE_NAME_1
(The one with $ is a reference to the one you put in Amplify. Also, I think you must have no spaces around the = symbol.)
Then trigger a build, and cross your fingers.
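For context, here is roughly where that plugin goes in a webpack.config.js (a sketch; the entry/output values are placeholders standing in for your existing config):

// webpack.config.js (sketch)
const webpack = require('webpack');

module.exports = {
  entry: './src/index.js',            // placeholder: your existing entry point
  output: { filename: 'bundle.js' },  // placeholder: your existing output config
  plugins: [
    // Copies the named variables from the build environment into the bundle
    new webpack.EnvironmentPlugin(['VARIABLE_NAME_1', 'VARIABLE_NAME_2']),
  ],
};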
Just to add to other comments regarding secret keys, since SO doesn't let me comment until 50 rep... If you're not using those env variables in your front-end app (as process.env.<var_name>) and only use them during build time, you're fine. Those values will not be bundled into your front-end app.
I know this question is related to frontend apps but its title appeared in search engines for me even though I was looking for build variables only, so it might be useful for other people too.
An add-on to @leandro's answer for anyone checking this in the future; I just want to simplify what you need to do, since I was completely lost on this:
In your code reference all API keys as process.env.EXAMPLE_API_KEY_1
Run this in a terminal in your root folder: npm install react-app-rewired --save-dev
Add config-overrides.js to the project root directory (NOT ./src):
// config-overrides.js
module.exports = function override(config, env) {
  // New config, e.g. config.plugins.push...
  return config;
};
Set your variables in AWS Amplify with your key and variable, pretty self-explanatory.
In your build settings, make it look something like this (I personally don't add npm build here, but you can if you need to):
frontend:
  phases:
    build:
      commands:
        - EXAMPLE_API_KEY_1=${EXAMPLE_API_KEY_1}
        - EXAMPLE_API_KEY_2=${EXAMPLE_API_KEY_2}
Start or restart your build.
I used @leandro's answer and mixed in an answer from this question to get it to work for me.
This worked for me to deploy React to Amplify
amplify.yml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm install
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
in App.js
const client = new ApolloClient({
  uri:
    process.env.NODE_ENV !== 'production'
      ? 'http://localhost:1337/graphql'
      : process.env.REACT_APP_ENDPOINT,
  cache: new InMemoryCache(),
});
I'm running a Next.js React app in a Docker container. It's being composed with several other containers: one running Ghost (I'm using the API), one running MySQL, and one running NGINX. I've got everything running in development mode.
It works perfectly when run using next dev. But when I run it by doing next build and next start, I start seeing errors like Error: getaddrinfo ENOTFOUND ghost-api when I try to make server-side HTTP requests to my Ghost API container. I'm not entirely sure what the issue is, but it seems like there's some issue with how Node is making requests after being built. I've been digging through a lot of Docker/Node questions trying to figure this one out, but haven't had any luck.
The entire project can be found here: https://github.com/MichaelWashburnJr/react-cms
The problem may be in the environment variables that you are using. In both the getGhostApi and getGhostApiKey functions, you are using environment variables.
In Next.js, you'll have to specify a next.config.js in which you define the variables that you need at runtime.
Ex. next.config.js
module.exports = {
  serverRuntimeConfig: {
    // Will only be available on the server side
    mySecret: 'secret',
    secondSecret: process.env.SECOND_SECRET, // Pass through env variables
  },
  publicRuntimeConfig: {
    // Will be available on both server and client
    staticFolder: '/static',
  },
}
You can also refer to the Next.js documentation for the same:
https://nextjs.org/docs/api-reference/next.config.js/runtime-configuration
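For completeness, those values are read back with next/config, the standard Next.js API for runtime configuration (the keys here match the example config above):

import getConfig from 'next/config';

const { serverRuntimeConfig, publicRuntimeConfig } = getConfig();

console.log(serverRuntimeConfig.mySecret);     // defined on the server only
console.log(publicRuntimeConfig.staticFolder); // defined on both server and client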
I'm not able to reproduce the error. How are you starting the frontend container in prod mode?
From the error it appears like you might be trying to start the frontend container or the frontend app as a separate process without starting it as part of the compose project. If that is the case, the name ghost-api won't be resolvable and you would get the Error: getaddrinfo ENOTFOUND ghost-api error.
I've changed the command key of the frontend container as follows:
command: [ "yarn", "start-prod" ]
Changed the "start-prod" script in frontend/package.json as follows:
"start-prod": "next build && NODE_ENV='production' next start"
and everything worked as it did in dev mode. I got an UNKNOWN_CONTENT_API_KEY error in both dev and prod mode, but there is definitely no ghost-api name resolution error.
After cloning your repo:
$ grep -R ST_API *
frontend/.env.development:GHOST_API_URL=http://ghost-api:2368
frontend/.env.production:GHOST_API_URL=http://ghost-api:2368
frontend/src/constants/Config.js:export const getGhostApi = () => process.env.GHOST_API_URL || 'http://localhost:8000';
ghost-api is not a domain name: to make it work, you need to edit your hosts file or (for a real production environment) change http://ghost-api:2368 in the frontend/.env.production file to the real deploy domain name.
If you are asking why you can't rely on docker compose networking here, the answer is: you can, but only inside the containers; the front end runs in the browser of your application's client, which is outside the containers.
Hope this helps.
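For example, frontend/.env.production could point at the public hostname instead (example.com is a placeholder for the real deploy domain):

GHOST_API_URL=https://ghost-api.example.com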
It seems that Docker's hostname resolution does not work during build time. That is why ghost-api is not found.
Instead of referencing the other container by its name (ghost-api), on Mac you can try host.docker.internal. On Linux, using host networking during build worked for me:
nextjs-app:
  build:
    network: "host"
    # ...
  network_mode: "host"
This way, you can reference the other container using localhost.
So,
I am using Nuxt
I am deploying to Google Cloud Run
I am using the dotenv package with a .env file in development, and it works fine.
I use process.env.VARIABLE_NAME within my dev server on Nuxt and it works great; I make sure that .env is in .gitignore so that it doesn't get uploaded.
However, I then deploy my application using Google Cloud Run. I make sure I go to the Environment tab and add exactly the same variables that are in the .env file.
However, the variables are coming back as "undefined".
I have tried all sorts of ways of fixing this, but the only way I can is to upload my .env with the project, which I do not wish to do, as Nuxt exposes this file in the client-side JS.
Has anyone come across this issue and know how to sort it out?
DOCKERFILE:
# base node image
FROM node:10
WORKDIR /user/src/app
ENV PORT 8080
ENV HOST 0.0.0.0
COPY package*.json ./
RUN npm install
# Copy local nuxt code to the container
COPY . .
# Build production app
RUN npm run build
# Start the service
CMD npm start
Kind Regards,
Josh
Finally I found a solution.
I was using Nuxt v2.11.x.
From versions greater than or equal to 2.13, Nuxt comes with Runtime Configurations, and this is what you need.
in your nuxt.config.js:
export default {
  publicRuntimeConfig: {
    BASE_URL: 'some'
  },
  privateRuntimeConfig: {
    TOKEN: 'some'
  }
}
Then you can access them like:
this.$config.BASE_URL || context.$config.TOKEN
More details here
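To pick up the Cloud Run environment variables at runtime rather than at build time, the runtime config can read them from process.env (a sketch; BASE_URL and TOKEN are the placeholder names from above):

// nuxt.config.js (sketch)
export default {
  publicRuntimeConfig: {
    // Resolved when the server starts, so values set in Cloud Run are picked up
    BASE_URL: process.env.BASE_URL || 'http://localhost:3000'
  },
  privateRuntimeConfig: {
    TOKEN: process.env.TOKEN // server-only, never shipped to the client
  }
}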
It is not required to set environment variable values in the Dockerfile. You can do it from the command line at deployment time.
For example here is the Dockerfile that I used.
FROM node:10
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm","start"]
This is the app.js file:
const express = require('express')
const app = express()
const port = 8080

app.get('/', (req, res) => {
  const envtest = process.env.ENV_TEST;
  res.json({
    message: 'Hello world',
    envtest
  });
});

app.listen(port, () => console.log(`Example app listening on port ${port}`))
To deploy use a script like this:
gcloud run deploy [SERVICE] --image gcr.io/[PROJECT-ID]/[IMAGE] --update-env-vars ENV_TEST=TESTVARIABLE
And the output will be like the following:
{"message":"Hello world","envtest":"TESTVARIABLE"}
You can check more detail on the official documentation:
https://cloud.google.com/run/docs/configuring/environment-variables#command-line
Thanks for taking the time to help me.
I'm deploying a Node.js Express project.
These are the steps that I have done:
1- Changed the port to process.env.PORT
code:
const PORT = process.env.PORT || 9000;
app.listen(PORT, function() {
  console.log(`Application is listening on ${PORT}`);
});
2- Created a Procfile with: web: node server.js
3- Made sure that in package.json the npm start command points to "node path/server.js"
The server works locally.
4- Important note: I am sending an AJAX request from my front end to the server to get data.
I have read in your documentation that I should add 0.0.0.0:
$.ajax({
  url: "0.0.0.0/hotels",
  cache: false,
  type: 'GET',
  success: function(result) {
    // bla bla ....
  }
});
I have also tried the URL of the Heroku app, the one I get after creating it.
Thanks in advance.
Have a great day.
I did not solve it yet, but I organized some helpful Heroku commands.
Useful commands:
git remote -v
git remote rm heroku
heroku create
git push heroku master
heroku ps:scale web=1
heroku open
heroku logs --tail
heroku run bash
Your code there looks fine (except 0.0.0.0; just use a relative path like /). I would ensure you've actually pushed the changes you have there. If you run heroku run bash, do you see your Procfile? When you run node server.js in that environment, does it run successfully?
I've seen Heroku customers get stuck on an issue like this, when the reality is that the code they have locally wasn't properly sent to Heroku.
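A sketch of the corrected request, assuming the /hotels route from the question (a relative URL resolves against whichever host serves the page, so it works both locally and on Heroku):

$.ajax({
  url: "/hotels",      // relative path instead of 0.0.0.0 or a hard-coded host
  cache: false,
  type: 'GET',
  success: function(result) {
    // render the hotels data here
  }
});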
Hello @jmccartie, thank you for replying, but it still does not work.
Could it be the static __dirname? I'm starting to question every part of the code :D
I changed the path; just to make sure I understood correctly,
it used to be: "http://localhost:9000/data/hotels"
and now it is: "/data/hotels"
Would you mind taking a look at my code?
Just double-check the parts I mentioned:
https://github.com/hibaAkroush/herokuNode
I will name the files to make it easier for you:
1- Procfile in the root
2- The server in server/index.js, line 24
3- The front end (where I'm sending an AJAX GET request) in client/home.js, line 6
4- package.json line 10: "start": "node server/index.js"
Thanks!
OK, I fixed it...
Woohoo!
I'm not sure which of these changes fixed it, but what I did was:
1- I moved the server to the root (and of course changed the code a bit so it would still work), then tested it locally to make sure.
2- Pushed to GitHub.
3- Added ./ to the Procfile, so it became
web: node ./index.js
instead of web: node index.js
Thanks everyone!
I have an app that uses the swagger-express-mw library and I start my app the following way:
SwaggerExpress.create({ appRoot: __dirname }, (err, swaggerExpress) => {
  // initialize application, connect to db etc...
});
Everything works fine on my local OSX machine. But when I use boot2docker to build an image from my app and run it, I get the following error:
/usr/src/app/node_modules/swagger-node-runner/index.js:154
config.swagger = _.defaults(readEnvConfig(),
^
TypeError: Cannot assign to read only property 'swagger' of [object Object]
My dockerfile looks like this (nothing special):
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm set progress=false
RUN npm install
COPY . /usr/src/app
EXPOSE 5000
CMD ["npm", "run", "dev"]
Has anyone else encountered similar situations where the local environment worked but the app threw an error inside the docker container?
Thanks!
Your issue doesn't appear to be something wrong with your docker container or machine setup. The error is not a docker error, that is a JavaScript error.
The docker container appears to be running your JavaScript module in Strict Mode, in which you cannot assign to read-only object properties (https://msdn.microsoft.com/library/br230269%28v=vs.94%29.aspx). On your OSX host, from the limited information we have, it looks like it is not running in strict mode.
There are several ways to specify the "strictness" of your scripts. I've seen that you can start Node.js with a flag --use_strict, but I'm not sure how dependable that is. It could be that NPM installed a different version of your dependent modules, and in the different version they specify different rules for strict mode. There are several other ways to define the "strictness" of your function, module, or app as a whole.
You can test for strict mode by using the suggestions here: Is there any way to check if strict mode is enforced?
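One common idiom from that question (note that ES modules are always strict, so this check is only meaningful in CommonJS code):

function isStrictMode() {
  // In strict mode, `this` is undefined in a bare function call;
  // in sloppy mode it is the global object, which is truthy.
  return !this;
}
console.log('strict mode:', isStrictMode());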
So, in summary: your issue is not inherently a Docker issue; it is an issue with the fact that your JavaScript environments are running in different strict modes. How you fix that will depend on where strict mode is being defined.
Hope that helps!