How to run a Node.js app in a MongoDB Docker image?

I am getting this error when I try to run the command "mongo" in the container's bash:
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:328:13
@(connect):1:6
exception: connect failed
I'm trying to set up a new Node.js app in a Mongo Docker image. The image builds fine from the Dockerfile on Docker Hub; I pull it, create a container, and everything is good, but when I type the "mongo" command in the bash I get the error.
This is my Dockerfile:
FROM mongo:4
RUN apt-get -y update
RUN apt-get install -y nodejs npm
RUN apt-get install -y curl python-software-properties
RUN curl -sL https://deb.nodesource.com/setup_11.x | bash -
RUN apt-get install -y nodejs
RUN node -v
RUN npm --version
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start"]
EXPOSE 3000

When your Dockerfile ends with CMD ["npm", "start"], it is building an image that runs your application instead of running the database.
Running two things in one container is slightly tricky and usually isn't considered a best practice. (You change your application code so you build a new image and delete and recreate your existing container; do you actually want to stop and delete your database at the same time?) You should run this as two separate containers, one running the standard mongo image and a second one based on a Dockerfile similar to this but FROM node. You might look into Docker Compose as a simple orchestration tool that can manage both containers together.
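A minimal sketch of what that application-only Dockerfile could look like, keeping just the Node steps from your file (the node:11 base is an assumption matching the setup_11.x script you were installing; adjust to your Node version):
FROM node:11
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]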
The one other thing that's missing in your example is any configuration that tells the application where its database is. In Docker this is almost never localhost ("this container", not "this physical host somewhere"). You should add a way to pass that host name in as an environment variable. In Docker Compose you'd set it to the name of the services: block running the database.
version: '3'
services:
  mongodb:
    image: mongo:4
    volumes:
      - './mongodb:/data/db'
  app:
    build: .
    ports:
      - '3000:3000'
    environment:
      MONGODB_HOST: mongodb
(https://hub.docker.com/_/mongo is worth reading in detail.)
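On the application side, the host name can then be read from the environment when building the connection string. A minimal sketch, assuming the official mongodb Node driver and the MONGODB_HOST variable from the compose file above:
// db.js - a sketch; MONGODB_HOST is set in the compose file above
const { MongoClient } = require('mongodb');

// fall back to localhost for development outside Docker
const host = process.env.MONGODB_HOST || 'localhost';
const client = new MongoClient(`mongodb://${host}:27017`);

async function main() {
  await client.connect();
  console.log(`connected to mongodb://${host}:27017`);
  await client.close();
}

main().catch(console.error);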

Related

Docker node and postgres in 1 container

I want to deploy my app on Heroku, so I won't be able to use more than one container. I want to run a PostgreSQL server and a Node web server at the same time in one container.
I tried this:
FROM node:12-alpine
WORKDIR /football_marketplace
COPY . .
RUN npm install -g pg
RUN apk add nano
USER postgres
CMD ["npm", "start"]
but when I try to use "psql" inside the container, it says that the command doesn't exist.
How would one do this?
All tutorials on the web show how to dockerize postgres with docker compose, but none of them show how I can do node and postgres in one container.

How to dockerize an ASP.NET Core application and PostgreSQL with docker compose

I am in the process of integrating the Dockerfile into my previous sample project so that everything is automated for easy code sharing and execution. I have a dockerizing problem and have tried to solve it, but to no avail. Hope someone can help. Thank you. Here is my problem:
My repository: https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate
Branch for dockerizing (my problem is on macOS):
https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate/pull/1
Dockerfile:
# syntax=docker/dockerfile:1
FROM node:16.11.1
FROM mcr.microsoft.com/dotnet/sdk:5.0
RUN apt-get update && \
apt-get install -y wget && \
apt-get install -y gnupg2 && \
wget -qO- https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y build-essential nodejs
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
RUN dotnet tool restore
EXPOSE 80/tcp
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
Docker compose:
version: "3.9"
services:
web:
container_name: backendnet5
build: .
ports:
- "5005:5000"
depends_on:
- database
database:
container_name: postgres
image: postgres:latest
ports:
- "5433:5433"
environment:
- POSTGRES_PASSWORD=admin
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
Commands:
docker-compose build
docker compose up
Problems:
I guess the problem is not being able to run the command line dotnet ef database update for my migrations. Many thanks for any help.
In your appsettings.json file, you say that the database hostname is 'localhost'. In a container, localhost means the container itself.
Docker Compose creates a bridge network where you can address each container by its service name.
Your connection string is
User ID=postgres;Password=admin;Host=localhost;Port=5432;Database=sample_db;Pooling=true;
but it should be
User ID=postgres;Password=admin;Host=database;Port=5432;Database=sample_db;Pooling=true;
You also map port 5433 on the database to the host, but postgres listens on port 5432. If you want to map it to port 5433 on the host, the mapping in the docker compose file should be 5433:5432. This is not what's causing your issue though. This just prevents you from connecting to the database from the host, if you need to do that.
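If you do want host access on 5433, the relevant part of the compose file would become (a sketch of just the database service; only the mapping changes):
database:
  container_name: postgres
  image: postgres:latest
  ports:
    - "5433:5432"   # host port 5433 -> container port 5432, where postgres listens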

Running docker compose inside Docker Container

I have a Dockerfile I am building. It will use Localstack to spin up a mock AWS environment; at the minute I do this locally with my docker compose file. So I was thinking I could just copy my docker-compose.yml over when building my Dockerfile, run docker-compose up from the Dockerfile, and then be able to run my application from the container created from the Dockerfile.
Here is the docker compose file
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is my Dockerfile:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
WORKDIR /app
COPY serverless.yml ./
COPY localstack_endpoints.json ./
COPY docker-compose.yml ./
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
EXPOSE 3000
RUN docker-compose up
CMD ["sls","deploy" ]
But the error I am receiving is
#17 0.710 Couldn't connect to Docker daemon at http+docker://localhost - is it running?
#17 0.710
#17 0.710 If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I'm new to Docker. When I researched the error online I saw people saying it needs to be run with sudo, although I think in this case it is something to do with my volumes linking to the host running the container, but I'm really not sure.
Inside the Docker container, the client tries to reach the Docker socket but cannot. So when you want to run your container, use
-v /var/run/docker.sock:/var/run/docker.sock
and it should fix the problem.
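For example (my-app is a placeholder for the image built from your Dockerfile):
# mount the host's Docker socket so the client inside the container
# can talk to the host's Docker daemon
docker run -v /var/run/docker.sock:/var/run/docker.sock my-app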
As a general rule, you can't do things in your Dockerfile that affect persistent state or processes running outside the container. Imagine docker building your image, docker pushing it to a registry, and docker pulling it on a new system; if the build step was able to start other running containers, they wouldn't be running with the same image on a different system.
At a more mechanical level, the build sequence doesn't have access to bind-mounted host directories or a variety of other runtime settings. That's why you get the "couldn't connect to Docker daemon" message: the build container isn't running a Docker daemon and it doesn't have access to the host's daemon.
Rather than try to have a container embed the Compose tool and Compose setup, you might find it easier to just distribute a docker-compose.yml file, and make the standard way to run your composite application be running docker-compose up on the host. Access to the Docker socket is incredibly powerful -- you can almost trivially use it to root the host -- and I wouldn't require it to avoid needing a fairly standard tool on the host.
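For instance, the deploy step could become one more service in the compose file you already have, so a single docker-compose up on the host runs everything. A sketch, assuming the deploy service reuses the question's Dockerfile with the docker/docker-compose steps removed:
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
  deploy:
    build: .
    depends_on:
      - localstack
    command: ["sls", "deploy"]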

How can I connect to my Verdaccio service launched as docker container from another docker container?

I am trying to build an npm repository which will be used on an offline system. My idea is to build a ready Docker container that will already contain all the packages needed for a given project; downloading the packages will be based on the package.json file.
To implement my idea, I need to run a Verdaccio server in one container; then the other container will run the npm install command, which will generate the appropriate files with the ready npm packages.
However, I cannot cope with waiting for the launch of the first container. So far I have tried to use the wait-for-it.sh and wait-for.sh scripts (https://docs.docker.com/compose/startup-order/), but they are not able to connect to the given address.
P.S. I am using Docker for Windows.
docker-compose.yml
version: '3.1'
services:
  listen:
    build: listen
    image: listen-img
    container_name: listen
    environment:
      - VERDACCIO_PORT=4873
    ports:
      - "4873:4873"
  download:
    build: download
    image: download-img
    container_name: download
    depends_on:
      - listen
networks:
  node-network:
    driver: bridge
Server Dockerfile:
FROM verdaccio/verdaccio:4
'npm install trigger' Dockerfile:
FROM node:15.3.0-alpine3.10
WORKDIR /usr/src/cached-npm
COPY package.json .
COPY wait-for.sh .
COPY /config/htpasswd /verdaccio/conf/htpasswd
USER root
RUN npm set registry http://host.docker.internal:4873
RUN chmod +x /usr/src/cached-npm/wait-for.sh
RUN /usr/src/cached-npm/wait-for.sh host.docker.internal:4873 -- echo "Listen is up"
RUN npm install
Is there something missing from my solution, like shared ports, or are there other issues that are causing my approach to fail?
It turned out that the problem was mixing up two processes: building and launching the containers. In my solution so far, I wanted to build both containers at the same time, while one of them needed an already running instance of the first in order to be built.
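Concretely, that means moving the steps that need a running registry out of the image build (RUN) and into the container's start command, so they execute only after docker-compose has started the listen service. A sketch reusing the file names from the question (the service name listen resolves on the compose network, so it can replace host.docker.internal):
FROM node:15.3.0-alpine3.10
WORKDIR /usr/src/cached-npm
COPY package.json .
COPY wait-for.sh .
RUN chmod +x ./wait-for.sh
# the registry only exists at run time, once the 'listen' container is up,
# so wait for it and install then, not at build time
CMD ./wait-for.sh listen:4873 -- npm install --registry http://listen:4873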

Running docker container is not reachable by browser

I started to work with Docker. I dockerized a simple Node.js app. I'm not able to access my container from the outside world (meaning by browser).
Stack:
node.js app with 4 endpoints (I used hapi server).
macOS
docker desktop community version 2.0.0.2
Here is my Dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install -g nodemon
COPY . .
EXPOSE 8000
CMD ["npm","run", "start-server"]
I did the following steps:
I ran from the command line in my working dir:
docker image build -t ares-maros .
docker container run -d --name rest-api -p 8000:8000 ares-maros
I checked whether the container is running via docker container ps. The result: the container is running.
I opened the browser and typed 0.0.0.0:8000 (also tried 127.0.0.1:8000 and localhost:8000). The result: the running Docker container is not reachable by browser.
I also went into the container by typing docker exec -it 81b3d9b17db9 sh and tried to reach my node app inside the container via wget/curl, and that works; I get responses from all Node.js endpoints.
Where could the problem be? Maybe my Mac is blocking the connection?
Thanks for help.
Please check the order of the parameters in the following command:
docker container run -d --name rest-api -p 8000:8000 ares-maros
I faced a similar issue. I was using -p port:port at the end of the command; simply moving it to right after docker run solved it for me.
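To illustrate why the order matters (the image name ares-maros comes from the question; everything after the image name is passed to the container as command arguments, not parsed by Docker):
# correct: options come before the image name
docker container run -d --name rest-api -p 8000:8000 ares-maros

# wrong: -p comes after the image name, so Docker never sees it
# and the port is not published
docker container run -d --name rest-api ares-maros -p 8000:8000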
