I'm learning how to use the Google Sheets API v4 to download data from a sheet into my Node.js server. I'm using Docker containers for my Node app. It works fine on localhost as a stand-alone server, but fails inside a Docker container, whether run locally or on an online server. I've whitelisted the IP address in the Google API console. (Note: I can use the Firebase API from this Node server without trouble, just not the Google Sheets v4 API.)
ref: https://developers.google.com/sheets/api/quickstart/nodejs#step_4_run_the_sample
First time you run the app, the command line on the node server displays:
Authorize this app by visiting this url:
https://accounts.google.com/o/oauth2/auth?access_type=offline&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets.readonly&response_type=code&client_id=xxx.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob
You go to that URL, and that Google page displays:
Sign in
Please copy this code, switch to your application and paste it there.
4/xxxxxxxxxxxx
And here's the rub. No way will that work. I can copy and paste the 4/xxx token into the command line, but it fails: no error message, no output, no function either. Is there a way to get there from here? I know this works fine in a stand-alone Node server on my desktop computer, but not in a Docker container (either on localhost or online). Is there a manual method for the authentication?
-----------Edit---------------------------------------------------------
I started looking at the code again, and the issue is that Node's readline fails while running inside a Docker container.
var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});
And that issue already exists here on Stack Overflow.
Unable to get standard input inside docker container
duplicate of:
how to get docker container to read from stdin?
You need to run the container in interactive mode with --interactive or -i:
Whoa... and how do you do that in a docker-compose deployment?
Interactive shell using Docker Compose
Ouch. No go on that posting; it didn't work at all for me. See the answer provided below...
Info provided here in case anybody else hits this bump in the road.
So it turns out the solution was nowhere near that provided by Interactive shell using Docker Compose
I'm running a node server in a docker container. I wanted to use the terminal to insert a token upon container startup in response to Google sheet API call, using the Node readline method.
Instead, the solution I came up with came from a note I saw in a Docker Compose GitHub issue. A long, slow read of the Docker Compose documentation got me to a better solution. It was as simple as:
$ docker-compose build
$ docker-compose run -p 8080:80 node
One important point here: the word node is the name of my service, as defined in the docker-compose.yml file below. This solution worked fine both on my localhost and on an online server via an SSH terminal.
Dockerfile:
FROM node:8
RUN mkdir -p /opt/app
# set our node environment, either development or production
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
# default to port 80 for node, and 5858 or 9229 for debug
ARG PORT=80
ENV PORT $PORT
EXPOSE $PORT 5858 9229
# install dependencies first, in a different location for easier app bind mounting for local development
WORKDIR /opt
COPY package.json package-lock.json* ./
RUN npm install && npm cache clean --force
ENV PATH /opt/node_modules/.bin:$PATH
# copy in our source code last, as it changes the most
WORKDIR /opt/app
COPY . /opt/app
CMD [ "node", "./bin/www" ]
docker-compose.yml
version: '3.1'

services:
  node:   # <---- name of the service in the container
    build:
      context: .
      args:
        - NODE_ENV=development
    command: ../node_modules/.bin/nodemon ./bin/www --inspect=0.0.0.0:9229
    ports:
      - "80:80"
      - "5858:5858"
      - "9229:9229"
    volumes:
      - .:/opt/app
      # this is a workaround to prevent host node_modules from accidentally getting mounted in container
      # in case you want to use node/npm both outside container for test/lint etc. and also inside container
      # this will overwrite the default node_modules dir in container so it won't conflict with our
      # /opt/node_modules location. Thanks to PR from #brnluiz
      - notused:/opt/app/node_modules
    environment:
      - NODE_ENV=development
    # tty: true ## tested, not needed
    # stdin_open: true ## tested, not needed

volumes:
  notused:
Many thanks to Bret Fisher for his work on node docker defaults.
I am trying to set up a skeleton project for a web app. Since I have no experience using docker I followed this tutorial for a Flask+Vue+Docker setup:
https://www.section.io/engineering-education/how-to-build-a-vue-app-with-flask-sqlite-backend-using-docker/
The backend and frontend run correctly on their own; now I wanted to dockerize the parts as described, with docker-compose and separate containers for back- and frontend. When I try to connect to localhost:8080 I get this:
"This page isn't working, localhost did not send any data"
This is my frontend dockerfile:
#Base image
FROM node:lts-alpine
#Install serve package
RUN npm i -g serve
# Set the working directory
WORKDIR /app
# Copy the package.json and package-lock.json
COPY package*.json ./
# install project dependencies
RUN npm install
# Copy the project files
COPY . .
# Build the project
RUN npm run build
# Expose a port
EXPOSE 5000
# Executables
CMD [ "serve", "-s", "dist"]
and this is the docker-compose.yml
version: '3.8'

services:
  backend:
    build: ./backend
    ports:
      - 5000:5000
  frontend:
    build: ./frontend
    ports:
      - 8080:5000
In the Docker Desktop GUI for the frontend container I get the log message "Accepting connections at http://localhost:3000", but when I open it in the browser it connects me on port 8080.
During research I found that many people say I have to make the app serve on 0.0.0.0 for it to work from a Docker container, but I don't know how to configure that. I tried adding
devServer: {
  public: '0.0.0.0:8080'
}
to my vue.config.js, which did not change anything. Others suggested changing the docker run command to incorporate the host change, but I don't use that; I use docker-compose up to start the app.
Sorry for my big confusion, I hope someone can help me out here. I really hope it's something simple I am overlooking.
Thanks to everyone trying to help in advance!
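For what it's worth, one mismatch is visible in the setup above: serve listens on port 3000 by default (as the log message shows), while the Dockerfile EXPOSEs 5000 and compose maps 8080:5000, so the published port points at nothing. A sketch of one possible fix, assuming serve's -l flag, is to tell serve to listen on the port the container actually publishes:

```dockerfile
# ...same frontend Dockerfile as above, with serve bound to the exposed port
EXPOSE 5000
CMD [ "serve", "-s", "dist", "-l", "5000" ]
```

Note that the devServer block in vue.config.js only affects the webpack dev server (npm run serve), not the serve static server used in this production build, which is likely why changing it had no effect.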
I want to create a complete Node.js environment for developing any kind of application (scripts, API services, websites, etc.), also using different services (e.g. MySQL, Redis, MongoDB). I want to use Docker to do it, in order to have a portable, multi-OS environment.
I've created a Dockerfile for the container in which is installed Node.js:
FROM node:8-slim
WORKDIR /app
COPY . /app
RUN yarn install
EXPOSE 80
CMD [ "yarn", "start" ]
And a docker-compose.yml file where I add the services that I need to use:
version: "3"

services:
  app:
    build: ./
    volumes:
      - "./app:/app"
      - "/app/node_modules"
    ports:
      - "8080:80"
    networks:
      - webnet
  mysql:
    ...
  redis:
    ...

networks:
  webnet:
I would like to ask you what the best patterns are to achieve these goals:
Having all the work directory shared across the host and docker container in order to edit the files and see the changes from both sides.
Having the node_modules directory visible on both the host and the docker container in order to be debuggable also from an IDE in the host.
Since I want a development environment suitable for every project, I would like a container that, once started, I can log into using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive, instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
Thank you in advance!
Having all the work directory shared across the host and docker container in order to edit the files and see the changes from both sides.
use -v volume option to share the host volume inside the docker container
Having the node_modules directory visible on both the host and the docker container in order to be debuggable also from an IDE in the host.
same as above
Since I want a development environment suitable for every project, I would like a container that, once started, I can log into using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive, instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
In docker-compose.yml, define these for interactive mode:
stdin_open: true
tty: true
Then attach to the running container with docker exec -it <container_name> bash.
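Putting the two settings together, a minimal docker-compose.yml sketch (reusing the app service name from the question) could look like this:

```yaml
version: "3"

services:
  app:
    build: ./
    stdin_open: true   # equivalent of docker run -i
    tty: true          # equivalent of docker run -t
    volumes:
      - "./app:/app"
      - "/app/node_modules"
```

With stdin_open and tty set, a container whose command is an interactive shell stays alive after docker-compose up -d, and docker-compose exec app bash drops you into it, with no tail -f workaround needed.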
I am very new to Docker, so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile, which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
CMD [ "npm", "start" ]
EXPOSE 8000
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app on my browser on localhost:8000
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
But you can use docker-compose to set config and run your docker images easily.
First installing the docker-compose. Install Docker Compose
Second create docker-compose.yml beside the Dockerfile and copy this code on them
version: '3'

services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your services with this command:
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode) to docker-compose up, and use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First get the IP of the docker container by running
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser, or using curl, with that IP and the exposed port:
Note that you will not be able to access the container on 0.0.0.0, because the port is not mapped.
End goal: To spin up a docker container running my expressjs application on port 3000 (as if I am using npm start).
Details:
I am using Windows 10 Enterprise.
This is a very basic, front-end Express.js application.
It runs fine using npm start – no errors.
Dockerfile I am using:
FROM node:8.11.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY src .
CMD node src/index.js
EXPOSE 3000
Steps:
I am able to create an image, using basic docker build command:
docker build -t portfolio-img .
Running the image (I am using this command from a tutorial www.katacoda.com/courses/docker/deploying-first-container):
docker run -d --name portfolio-container -p 3000:3000 portfolio-img
The container is not running. It is created, since I can inspect it, but it has exited after the command. I am guessing I did something wrong with the last command, or I am not giving the correct instructions in the dockerfile.
If anyone can point me in the right direction, I'd greatly appreciate it.
Already have searched a lot on the docker documentation and on here.
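One thing worth double-checking in the Dockerfile above: COPY src . copies the contents of src into /app, so /app/src/index.js may not exist and CMD node src/index.js would exit immediately (docker logs portfolio-container should confirm the error). A sketch of the same Dockerfile with the paths aligned, assuming the entry point lives at src/index.js:

```dockerfile
FROM node:8.11.2
WORKDIR /app
COPY package.json .
RUN npm install
# keep the src/ directory so the CMD path below matches
COPY src ./src
EXPOSE 3000
CMD ["node", "src/index.js"]
```

The alternative is to keep COPY src . and change the command to CMD node index.js; either way, the copied layout and the command's path have to agree.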
I'm finding the docs sorely lacking for this (or I'm dumb), but here's my setup:
The webapp is running on Node and Express, on port 8080. It also connects to a MongoDB container (hence why I'm using docker-compose).
In my Dockerfile, I have:
FROM node:4.2.4-wheezy
# Set correct environment variables.
ENV HOME /root
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install;
# Bundle app source
COPY . /usr/src/app
CMD ["node", "app.js"]
EXPOSE 8080
I run:
docker-compose build web
docker-compose build db
docker-compose up -d db
docker-compose up -d web
When I run docker-machine ip default I get 192.168.99.100.
So when I go to 192.168.99.100:8080 in my browser, I would expect to see my app - but I get connection refused.
What silly thing have I done wrong here?
Fairly new here as well, but did you publish your ports in your docker-compose file?
Your Dockerfile will simply expose the ports, but not open access to your host. Publishing (with the -p flag on a docker run command, or ports in a docker-compose file) will enable access from outside the container (see ports in the docs).
Something like this may help:
ports:
  - "8080:8080"
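Putting it together with the build and up commands from the question, a minimal docker-compose.yml sketch might look like the following (the service names web and db, and the mongo image, are assumptions based on the question):

```yaml
version: '2'

services:
  web:
    build: .
    ports:
      - "8080:8080"   # host:container (publishes the port; EXPOSE alone does not)
    depends_on:
      - db
  db:
    image: mongo
```

With the port published, the app should be reachable at the docker-machine IP on port 8080, i.e. 192.168.99.100:8080 in this setup.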