I am new to Docker. I built a crawler with headless Chrome, but now I have to deploy it with Docker. There is an image for this at https://github.com/yukinying/chrome-headless-browser-docker which hosts remote debugging mode on port 9222, and there is another container in which my Node app is running. I don't know how to link these two containers.
docker run -it --name nodeserver --link chrome:chrome nodeapp bash
But inside that container I can't access localhost:9222.
I would suggest using docker-compose; it comes with Docker for Mac / Windows and is made for this kind of simple connection.
You would need a docker-compose file something like:
version: "3"
services:
headless-browser:
image: yukinying/chrome-headless
ports:
- 9222
crawler:
build:
context: .
dockerfile: Dockerfile
links:
- headless-browser
And then a Dockerfile in the same folder, e.g. to test the connection use:
FROM alpine
RUN apk update && apk add curl
CMD curl http://headless-browser:9222
Use the command docker-compose up
The output will be the console page as text (so you know the connection is working OK).
To avoid any issues with indentation, I've made a repo to copy and paste from: https://github.com/TheSmokingGnu/stackOverflowAnswer
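For the crawler itself, a minimal sketch of how the Node service could check that it can reach the browser container (this assumes the headless-browser service name from the compose file above, and that the image exposes the DevTools endpoint on all interfaces):

// check.js - verify the crawler container can reach the headless browser's DevTools endpoint
const http = require('http');

http.get('http://headless-browser:9222/json/version', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log('DevTools reachable:', body));
}).on('error', (err) => {
  console.error('Could not reach headless-browser:9222:', err.message);
});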
I have a Dockerfile I am building. It will use Localstack to spin up a mock AWS environment; at the minute I do this locally with my docker-compose file. So I was thinking I could just copy my docker-compose.yml over when building my Dockerfile, run docker-compose up from the Dockerfile, and then run my application from the container created from that Dockerfile.
Here is the docker compose file
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is my Dockerfile
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
WORKDIR /app
COPY serverless.yml ./
COPY localstack_endpoints.json ./
COPY docker-compose.yml ./
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
EXPOSE 3000
RUN docker-compose up
CMD ["sls","deploy" ]
But the error I am receiving is
#17 0.710 Couldn't connect to Docker daemon at http+docker://localhost - is it running?
#17 0.710
#17 0.710 If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I'm new to Docker. When I researched the error online I saw people saying it needs to be run with sudo, although I think in this case it is something to do with my volumes linking to the host running the container, but I'm really not sure.
Inside the container, Docker tries to reach the socket but it cannot. So when you run your container, add
-v /var/run/docker.sock:/var/run/docker.sock
and it should fix the problem.
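For example, a run command along these lines (my-localstack-app is just a placeholder for whatever image you built):

docker run -v /var/run/docker.sock:/var/run/docker.sock my-localstack-app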
As a general rule, you can't do things in your Dockerfile that affect persistent state or processes running outside the container. Imagine docker building your image, docker pushing it to a registry, and docker pulling it on a new system; if the build step was able to start other running containers, they wouldn't be running with the same image on a different system.
At a more mechanical level, the build sequence doesn't have access to bind-mounted host directories or a variety of other runtime settings. That's why you get the "couldn't connect to Docker daemon" message: the build container isn't running a Docker daemon and it doesn't have access to the host's daemon.
Rather than try to have a container embed the Compose tool and Compose setup, you might find it easier to just distribute a docker-compose.yml file, and make the standard way to run your composite application be running docker-compose up on the host. Access to the Docker socket is incredibly powerful -- you can almost trivially use it to root the host -- and I wouldn't require it to avoid needing a fairly standard tool on the host.
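As a sketch of that approach (the deployer service name is an assumption; adapt it to your project): drop the docker/docker-compose COPY lines, the COPY docker-compose.yml, and the RUN docker-compose up from the Dockerfile, keep the localstack service from the compose file above, and add your app as just another service:

version: '3.1'
services:
  localstack:
    # ... unchanged from the compose file in the question ...
  deployer:
    build: .                 # the node:16-alpine image from the Dockerfile, minus the docker/docker-compose binaries
    depends_on:
      - localstack
    command: ["sls", "deploy"]

Then the whole thing is started on the host with docker-compose up --build.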
How do I run all my Node.js files in a single container?
app1.js running on port 1001
app2.js running on port 1002
app3.js running on port 1003
app4.js running on port 1004
Dockerfile
FROM node:latest
WORKDIR /rootfolder
COPY package.json ./
RUN npm install
COPY . .
RUN chmod +x /script.sh
RUN /script.sh
script.sh
#!/bin/sh
node ./app1.js
node ./app2.js
node ./app3.js
node ./app4.js
You would almost always run these in separate containers. You're allowed to run multiple containers from the same image, you can override the default command for an image when you start it up, and you can remap the ports an application uses when you start it.
In your Dockerfile, delete the RUN /script.sh line at the end. (That will try to start the servers during the image build, which you don't want.) Now you can build and run containers:
# Build the image
docker build -t myapp .
# Create a Docker network
docker network create mynet
# Run the first container in the background, on that network, with a known
# name, publishing a port, from this image, running this command
docker run \
  -d \
  --net mynet \
  --name app1 \
  -p 1001:3000 \
  myapp \
  node ./app1.js
# Run the second container the same way, with its own name and host port
docker run \
  -d \
  --net mynet \
  --name app2 \
  -p 1002:3000 \
  myapp \
  node ./app2.js
(I've assumed all of the scripts listen on the default Express port 3000, which is the second port number in the -p options.)
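For reference, a minimal sketch of what each appN.js is assumed to look like here, listening on the container-internal port 3000 that -p maps to:

// app1.js - minimal Express server listening on port 3000 inside the container
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('hello from app1'));
app.listen(3000);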
Docker Compose is a useful tool for running multiple containers together and can replicate this functionality. A docker-compose.yml file matching this setup would look like:
version: '3.8'
services:
  app1:
    build: .
    ports:
      - 1001:3000
    command: node ./app1.js
  app2:
    build: .
    ports:
      - 1002:3000
    command: node ./app2.js
Compose will create a Docker network on its own, and take responsibility for naming the images and containers. docker-compose up will start all of the services in parallel.
You need to expose the ports first using:
EXPOSE 1001
...
EXPOSE 1004
in your Dockerfile, and later run the container using the -p parameter, as in -p 1501:1001, to expose, for example, port 1501 of the host to act as port 1001 of the container.
ref: https://docs.docker.com/engine/reference/commandline/run/
However, it is suggested to minimize the number of programs run from a single Docker container, so you might like to have a container for each of your JS scripts.
Yet, nothing stops you from using:
docker exec -it yourDockerMachineName bash
several times, running each of your node commands in one of them.
What you are trying to achieve is considered an anti-pattern.
Instead, keeping the single-responsibility principle in mind when building up the stacks of your apps will give you better leverage to manage, monitor, and change your app.
This article from the official documentation explains when you might want to do this.
If you want to manage multiple containers as a whole, having one Dockerfile for each js, combined with a docker-compose file to bring up all the containers at once on different ports might answer your question. Here is a minimal example:
docker-compose.yml
version: '3.7'
services:
  app1:
    image: your-js-app-1-image
    container_name: app-1
    ports:
      - '1001:3000'
  app2:
    image: your-js-app-2-image
    container_name: app-2
    ports:
      - '1002:3000'
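A per-app Dockerfile could be as small as this sketch (one per script, used to build your-js-app-1-image and so on; it is just the Dockerfile from the question with the RUN /script.sh line replaced by a CMD):

FROM node:latest
WORKDIR /rootfolder
COPY package.json ./
RUN npm install
COPY . .
CMD ["node", "./app1.js"]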
Ideally you should run each app in a separate container, if your applications are different. In case they are the same and you want to run multiple instances on different ports,
docker run -p <your_public_tcp_port_number>:3000 <image_name>
or a good docker-compose.yaml would suffice.
Technically you may want to run each different application in a different container, and multiple instances of the same application, in order to make it easy to version each of your apps as a newer independent image. That allows you to independently stop, deploy and start your apps in the production environment.
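For instance, a sketch of such a docker-compose.yaml running the same image twice on different host ports (your-app-image is a placeholder):

version: '3'
services:
  instance1:
    image: your-app-image
    ports:
      - '1001:3000'
  instance2:
    image: your-app-image
    ports:
      - '1002:3000'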
I want to create a complete Node.js environment for developing any kind of application (script, API service, website, etc.), also using different services (e.g. MySQL, Redis, MongoDB). I want to use Docker for this in order to have a portable, multi-OS environment.
I've created a Dockerfile for the container in which Node.js is installed:
FROM node:8-slim
WORKDIR /app
COPY . /app
RUN yarn install
EXPOSE 80
CMD [ "yarn", "start" ]
And a docker-compose.yml file where I add the services that I need to use:
version: "3"
services:
app:
build: ./
volumes:
- "./app:/app"
- "/app/node_modules"
ports:
- "8080:80"
networks:
- webnet
mysql:
...
redis:
...
networks:
webnet:
I would like to ask you what the best patterns are to achieve these goals:
Having all the work directory shared across the host and docker container in order to edit the files and see the changes from both sides.
Having the node_modules directory visible on both the host and the docker container in order to be debuggable also from an IDE in the host.
Since I want a development environment suitable for every project, I would have a container where, once it is started, I can log in using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
Thank you in advance!
Having all the work directory shared across the host and docker container in order to edit the files and see the changes from both sides.
Use the -v volume option to share the host directory inside the Docker container.
Having the node_modules directory visible on both the host and the docker container in order to be debuggable also from an IDE in the host.
same as above
Since I want a development environment suitable for every project, I would have a container where, once it is started, I can log in using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
In docker-compose.yml, define these for interactive mode:
stdin_open: true
tty: true
Then attach to the running container with docker exec -it.
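Put together, a sketch of how the app service from the question could look with those settings (command: bash is an assumption; any shell works, since the open stdin and TTY keep it alive):

version: "3"
services:
  app:
    build: ./
    command: bash        # no server process needed; the shell stays alive
    stdin_open: true     # keep STDIN open
    tty: true            # allocate a pseudo-TTY
    volumes:
      - "./app:/app"
      - "/app/node_modules"
    ports:
      - "8080:80"

After docker-compose up -d, docker-compose exec app bash drops you into the running container.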
I have a problem when I run a mongo image with docker-compose.yml. I need to encrypt my data because it is very sensitive. My docker-compose.yml is:
version: '3'
services:
  mongo:
    image: "mongo"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db
I checked that the mongodb-keyfile exists in data/db, OK no problem, but when I build and bring up the image, the command is:
"docker-entrypoint.sh mongod --enableEncryption --encryptionKeyFile /data/db/mongodb-keyfile"
The status:
About a minute ago Exited (2) About a minute ago
When I check the logs I see:
Error parsing command line: unrecognised option '--enableEncryption'
I understand the error, but I don't know how to solve it. I'm thinking of making a Dockerfile from an Ubuntu (or whatever Linux) image and installing Mongo with all the necessary configuration, or finding another way to solve it.
Please help me, thanks.
According to the documentation, encryption is available in MongoDB Enterprise only, so you need a paid subscription to use it.
For the Docker image of the enterprise version, it says here that you can build it yourself:
Download the Docker build files for MongoDB Enterprise.
Set MONGODB_VERSION to your major version of choice.
export MONGODB_VERSION=4.0
curl -O --remote-name-all https://raw.githubusercontent.com/docker-library/mongo/master/$MONGODB_VERSION/{Dockerfile,docker-entrypoint.sh}
Build the Docker container.
Use the downloaded build files to create a Docker container image wrapped around MongoDB Enterprise. Set DOCKER_USERNAME to your Docker Hub username.
export DOCKER_USERNAME=username
chmod 755 ./docker-entrypoint.sh
docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com -t $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION .
Test your image.
The following commands run mongod locally in a Docker container and check the version.
docker run --name mymongo -itd $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION
docker exec -it mymongo /usr/bin/mongo --eval "db.version()"
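Once that enterprise image is built, a sketch of how the compose file from the question could point at it (replace yourusername with your Docker Hub username; everything else stays as it was):

version: '3'
services:
  mongo:
    image: "yourusername/mongo-enterprise:4.0"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db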
I am very new to Docker, so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
RUN npm start
EXPOSE 8000
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app on my browser on localhost:8000
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
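So with the Dockerfile in the question, publishing still has to happen at run time, for example:

docker run -p 8000:8000 imageName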
But you can use docker-compose to set up the config and run your Docker images easily.
First, install docker-compose: Install Docker Compose
Second, create a docker-compose.yml beside the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your container with this command:
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode): docker-compose up -d, and use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First get the IP of the docker container by running
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser or using curl, using that IP and the exposed port:
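For example (172.17.0.2 is just an illustration; use whatever IP the inspect command printed, plus the port your app listens on):

curl http://172.17.0.2:8000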
Note that you will not be able to access the container on 0.0.0.0 because the port is not mapped.