Can't access to localhost:8080 with docker on windows 10 - node.js

When running my docker-compose-development.yaml on my computer, I can't connect to http://localhost:8080.
However, I can run docker-compose -f docker-compose-development.yaml exec web curl http://localhost:8080 and I get a result, so it doesn't seem to be a code problem.
What I've already done:
Connect directly to the container IP obtained with $ docker inspect ...
Try on another Windows 10 laptop (it works)
Change localhost to 127.0.0.1 or 0.0.0.0
Try another port than 8080
This is my $ docker version:
Client:
Version: 17.11.0-ce
API version: 1.34
Go version: go1.8.4
Git commit: 1caf76c
Built: Mon Nov 20 18:30:11 2017
OS/Arch: windows/amd64
Server:
Version: 17.11.0-ce
API version: 1.34 (minimum version 1.12)
Go version: go1.8.5
Git commit: 1caf76c
Built: Mon Nov 20 18:39:28 2017
OS/Arch: linux/amd64
Experimental: true
This is my Dockerfile:
FROM node:9.1-alpine
RUN npm install -g nodemon
WORKDIR /opt/webserver/
COPY . /opt/webserver
RUN npm install
CMD ["npm","run","start"]
EXPOSE 8080
RUN rm -rf /tmp/* /var/tmp/*
This is my docker-compose-development.yaml:
version: "3"
services:
  web:
    image: registry.gitlab.com/soundtrack/webapp
    ports:
      - "8080:8080"
    links:
      - database
    volumes:
      - ".:/opt/webserver:rw"
  database:
    image: mongo:3.4.10
ps command from docker-compose:
$ docker-compose -f .\docker-compose-development.yaml ps
Name Command State Ports
--------------------------------------------------------------------------------
webapp_database_1 docker-entrypoint.sh mongod Up 27017/tcp
webapp_web_1 npm run start Up 0.0.0.0:8080->8080/tcp

I ran my container:
docker run -d -it -p 10080:80 --name=container-cool-name <container-id>
And I could see my running app with curl (inside the container):
docker exec -ti container-cool-name bash
# curl localhost:80
Here I read that if you're using Docker Toolbox, "docker-machine ip will tell you".
My app was correctly displaying at 192.168.99.100:10080

Try the following steps:
1 - List all the Docker containers:
docker ps -a
After running this command you should be able to view all your Docker containers (running or stopped), and you should see a container with the name webapp_web_1 listed there.
2 - Get the IP address where your webserver container is running. To do that, run the following command:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webapp_web_1
Try accessing the IP shown after running the above command, and access your running container with that IP instead of localhost.
I have answered a similar question related to this exact problem on Windows. Please refer to it via this link to get a detailed idea of why this issue prevails.
Hope this helps you out.

Related

MongoDB and unrecognised option '--enableEncryption'

I have a problem when I run a mongo image with docker-compose.yml. I need to encrypt my data because it is very sensitive. My docker-compose.yml is:
version: '3'
services:
  mongo:
    image: "mongo"
    command: ["mongod","--enableEncryption","--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db
I checked that the mongodb-keyfile exists in data/db; no problem there. But when I build and start the image, the command is:
"docker-entrypoint.sh mongod --enableEncryption --encryptionKeyFile /data/db/mongodb-keyfile"
The status:
About a minute ago Exited (2) About a minute ago
The logs show:
Error parsing command line: unrecognised option '--enableEncryption'
I understand the error, but I don't know how to solve it. I'm thinking of making a Dockerfile from an Ubuntu (or other Linux) image and installing Mongo with all the necessary configuration, or of finding another way to solve it.
Please help me, thanks.
According to the documentation, encryption is available in MongoDB Enterprise only, so you need a paid subscription to use it.
For the Docker image of the Enterprise version, it says here that you can build it yourself:
Download the Docker build files for MongoDB Enterprise.
Set MONGODB_VERSION to your major version of choice.
export MONGODB_VERSION=4.0
curl -O --remote-name-all https://raw.githubusercontent.com/docker-library/mongo/master/$MONGODB_VERSION/{Dockerfile,docker-entrypoint.sh}
Build the Docker container.
Use the downloaded build files to create a Docker container image wrapped around MongoDB Enterprise. Set DOCKER_USERNAME to your Docker Hub username.
export DOCKER_USERNAME=username
chmod 755 ./docker-entrypoint.sh
docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com -t $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION .
Test your image.
The following commands run mongod locally in a Docker container and check the version.
docker run --name mymongo -itd $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION
docker exec -it mymongo /usr/bin/mongo --eval "db.version()"
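Once the Enterprise image is built, the original compose file could point at it instead of the stock mongo image. A sketch; the image name follows the build commands above (with DOCKER_USERNAME=username and MONGODB_VERSION=4.0) and may need adjusting:

```yaml
version: '3'
services:
  mongo:
    # Assumes the locally built Enterprise image from the steps above
    image: username/mongo-enterprise:4.0
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db
```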

how to link two docker containers?

I am new to Docker. I built a crawler with headless Chrome, but now I have to deploy it with Docker. There is an image for this (https://github.com/yukinying/chrome-headless-browser-docker) that hosts remote debugging mode on port 9222, and there is another container where my Node app is running. I don't know how to link these two containers.
docker run -it --name nodeserver --link chrome:chrome nodeapp bash
But inside that container I can't access localhost:9222.
I would suggest using docker-compose; it comes with Docker for Mac / Windows and is made for this kind of simple connection.
You would need a docker-compose file something like:
version: "3"
services:
  headless-browser:
    image: yukinying/chrome-headless
    ports:
      - 9222
  crawler:
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - headless-browser
And then a Dockerfile in the same folder. E.g. for testing the connection, use:
FROM alpine
RUN apk update && apk add curl
CMD curl http://headless-browser:9222
Use the command docker-compose up.
The output will be the console page as text (so you know the connection is working OK).
To avoid any issues with indentation... I've made a repo to copy and paste from: https://github.com/TheSmokingGnu/stackOverflowAnswer

docker daemon crashes randomly after add ENTRYPOINT - Docker

Start Docker Daemon
docker daemon -g /u01/docker
Docker Info
[bu#bu ~]$ docker version
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
Dockerfile
FROM nginxmagento
MAINTAINER Bilal
ENTRYPOINT service ssh restart && service nginx restart && service mysql restart && service cron restart && service php7.0-fpm restart && bash
Build Image
docker build -t magento .
Create Container
docker run -it -d --name magento -h host.name -v /u01/Bilal/test/_data:/var/www/html -p 3020:80 --net mynetwork --ip 172.18.0.51 --privileged magento
It successfully starts the services in the ENTRYPOINT, but the Docker daemon randomly shuts down. The daemon throws no error either; it just exits from the terminal. I searched for the daemon log file with the Linux find command and via this link, but I can't find where it is located.
Please give suggestions about why it behaves like this.
If I follow any bad practice, please mention it.

DRY Config for Docker build and App

First off, this became a much longer post than I expected. tl;dr: How can I make my config DRY with Docker (I'm assuming with environment variables)?
I am developing a Node.js server with a MongoDB database, and I am switching over to Docker to (hopefully) simplify my environment and keep it consistent for my server. Let me start out by saying that I definitely do not understand the entire Docker container life cycle, but I am learning as I go. After a lot of searching, I cannot find a solution to what I am trying to do. I am using Docker for Windows with Hyper-V (docker version at the bottom).
So, I would like to be able to configure
Database name
Database user
Database password
App port
and a few other things
And allow these configured values to be used in several places including
Docker build phase (to setup database)
Node App (to connect to database)
What is the best way to set the database name, user, and password once, use them to initialize the database in the container, and then connect with them whenever the container is spun up?
Current Setup
Currently I have a docker-compose file that is the following (./docker-compose.yml)
version: "2"
services:
  myapp:
    image: node:boron
    volumes:
      - ./dist:/usr/src/app
    working_dir: /usr/src/app
    command: sh -c 'npm install; npm install -g nodemon; nodemon -e js app.js'
    ports:
      - "9021:9021"
    depends_on:
      - mongo
    networks:
      - all
  mongo:
    build: ./build/mongo
    networks:
      - all
networks:
  all:
Mongo Dockerfile (./build/mongo/Dockerfile)
# Image to make this from
FROM mongo:3
# Add files into the container for db setup
ADD initDB.js /tmp/
ADD createDBAdmin.js /tmp/
ADD mongod.conf /data/configdb
# Map the host directory mongo/myapp to /data
VOLUME /mongo/myapp:/data
RUN ls /data/*
RUN mongod -f /data/configdb/mongod.conf && sleep 5 && mongo ${DB_NAME} /tmp/createDBAdmin.js && sleep 5 && mongo ${DB_NAME} /tmp/initDB.js
CMD mongod --smallfiles --storageEngine wiredTiger
The initDB.js script (./build/mongo/initdb.js)
db.createUser(
  {
    user: "${DB_USER}",
    pwd: "${DB_PASS}",
    roles: [
      { role: "dbOwner", db: "${DB_NAME}" }
    ]
  },
  {
    w: "majority",
    wtimeout: 5000
  }
);
// For some reason this is needed to save the user
db.createCollection("test");
docker version
Client:
Version: 1.13.0
API version: 1.25
Go version: go1.7.3
Git commit: 49bf474
Built: Tue Jan 17 21:19:34 2017
OS/Arch: windows/amd64
Server:
Version: 1.13.0
API version: 1.25 (minimum version 1.12)
Go version: go1.7.3
Git commit: 49bf474
Built: Wed Jan 18 16:20:26 2017
OS/Arch: linux/amd64
Experimental: true
Extra credit
How can I persist database information outside of the container for backup purposes etc., and view the files? Do I need to map a directory on my host machine to /data/db in the container, and am I doing this correctly?
EDIT: This seems to be a good solution to the "Extra credit" question: How to deal with persistent storage (e.g. databases) in Docker.
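One common way to keep this DRY (a sketch, not from the original post) is to define the values once as environment variables in docker-compose and have the Node app read them at startup. The variable and service names here are assumptions matching the placeholders used above:

```javascript
// Hypothetical config module: read DB settings from the environment
// (set once in docker-compose), with development fallbacks.
const config = {
  dbName: process.env.DB_NAME || 'myapp',
  dbUser: process.env.DB_USER || 'admin',
  dbPass: process.env.DB_PASS || 'secret',
  appPort: parseInt(process.env.APP_PORT || '9021', 10),
};

// Connection string for the "mongo" service defined in docker-compose.
const mongoUrl =
  `mongodb://${config.dbUser}:${config.dbPass}@mongo:27017/${config.dbName}`;

module.exports = { config, mongoUrl };
```

The same DB_NAME / DB_USER / DB_PASS variables can then be passed to the mongo service (via an environment: section in the compose file) so both sides share a single definition.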

Docker expose not working

question:
I've created a simple site using Docker and Aurelia. The site runs in Docker, but is not accessible from my localhost. What I did:
create container
docker build -t randy/node-web-app .
docker run -p 9000:9000 -d randy/node-web-app
97f57c3d0da5d03f53b4ba893fdb866ca528e10e6c4a1b310726e514d8957650
see if the scripts ran:
docker logs 97f57c3d0da5
Application Available At: http://localhost:9000
Going into docker container terminal to see if the site is up:
docker exec -it 97f57c3d0da5 /bin/bash
See if it runs:
curl -i localhost:9000
summary:
HTTP/1.1 200 OK
<!DOCTYPE html>
(I actually see the HTML that it should return, but that's too big to post here.)
return to host terminal:
exit
curl -i localhost:9000
curl: (7) Failed to connect to localhost port 9000: Connection refused
How can I make sure I can access that site from my PC? In the first command, I've published port 9000:9000, so that shouldn't be a problem.
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install -g aurelia-cli
RUN npm install
EXPOSE 9000
CMD ["npm", "start"]
I used Kitematic instead of the normal version of Docker on the machine where I deployed this image. Kitematic maps the Docker image to an (internal) IP address instead of the host machine's localhost.
My answer was to use 192.168.1.67 as the IP instead of 127.0.0.1.
If you install Docker without Kitematic, this should not be an issue.
