Docker node and postgres in 1 container - node.js

I want to deploy my app on Heroku, so I won't be able to use more than one container. I want to run a PostgreSQL server and a Node web server at the same time, in one container.
I tried this:
FROM node:12-alpine
WORKDIR /football_marketplace
COPY . .
RUN npm install -g pg
RUN apk add nano
USER postgres
CMD ["npm", "start"]
but when I try to use "psql" inside the container, it says that the command doesn't exist.
How would one do this?
All the tutorials on the web show how to dockerize Postgres with Docker Compose, but none of them show how to run Node and Postgres in one container.
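For what it's worth: npm install -g pg only installs the Node.js Postgres client library; the node:12-alpine base image contains no PostgreSQL server at all, which is why psql is missing. A minimal sketch of one image carrying both (generally discouraged in favor of two containers; the start.sh wrapper script is an assumption, not something from the question):

```
FROM node:12-alpine
WORKDIR /football_marketplace

# The Alpine "postgresql" package provides the server and the psql client;
# npm's "pg" package is only the Node client library.
RUN apk add --no-cache postgresql

COPY . .
RUN npm install

# Hypothetical wrapper script: initialize the data directory on first run,
# start postgres in the background, then launch the Node app.
COPY start.sh /start.sh
CMD ["sh", "/start.sh"]
```

Note that with two processes in one container, Docker only supervises the one started by CMD; if Postgres crashes, nothing restarts it. On Heroku specifically, the usual route is the managed Heroku Postgres add-on rather than running the database in the dyno.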

Related

Docker container bound to local volume doesn't update

I created a new docker container for a Node.js app.
My Dockerfile is:
FROM node:14
# app directory
WORKDIR /home/my-username/my-proj-name
# Install app dependencies
COPY package*.json ./
RUN npm install
# bundle app source
COPY . .
EXPOSE 3016
CMD ["node", "src/app.js"]
After this I ran:
docker build . -t my-username/node-web-app
Then I ran: docker run -p 8160:3016 -d -v /home/my-username/my-proj-name:/my-proj-name my-username/node-web-app
The app is successfully hosted at my-public-ip:8160.
However, any changes I make on my server do not propagate to the Docker container. For example, if I touch test.txt on my server, I am not able to GET /test.txt online or see it in the container. The only way I can make changes is to rebuild the image, which is quite tedious.
Did I miss something here when binding the volume or something? How can I make it so that the changes I make locally also appear in the container?
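A detail worth checking in the run command above: the bind mount targets /my-proj-name inside the container, while the Dockerfile's WORKDIR (where the app actually runs) is /home/my-username/my-proj-name, so edits land in a directory the app never reads. A likely fix (assuming the paths shown in the question) is to mount over the working directory itself:

```
# Mount the host project directory onto the container path the app runs from
docker run -p 8160:3016 -d \
  -v /home/my-username/my-proj-name:/home/my-username/my-proj-name \
  my-username/node-web-app
```

Even with the mount fixed, a plain node src/app.js process will not pick up code changes without a restart; a watcher such as nodemon is needed for live reload.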

running docker container is not reachable by browser

I have started working with Docker. I dockerized a simple node.js app, but I'm not able to access the container from the outside world (i.e. from a browser).
Stack:
node.js app with 4 endpoints (I used hapi server).
macOS
docker desktop community version 2.0.0.2
Here is my dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install -g nodemon
COPY . .
EXPOSE 8000
CMD ["npm","run", "start-server"]
I did the following steps:
I ran the following from the command line, from my working dir:
docker image build -t ares-maros .
docker container run -d --name rest-api -p 8000:8000 ares-maros
I checked if container is running via docker container ps
Here is the result:
- the container is running
I open the browser and type 0.0.0.0:8000 (I also tried 127.0.0.1:8000 and localhost:8000), but nothing loads.
So the running Docker container is not reachable by the browser.
I also went into the container (docker exec -it 81b3d9b17db9 sh) and tried to reach the node app inside the container via wget/curl, and that works: I get responses from all node.js endpoints.
Where could the problem be? Maybe my Mac is blocking the connection?
Thanks for the help.
Please check the order of the parameters of the following command:
docker container run -d --name rest-api -p 8000:8000 ares-maros
I faced a similar issue. I was using -p port:port at the end of the command. Simply moving it to just after docker run solved it for me.

Run node Docker without port mapping

I am very new to Docker, so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
RUN npm start
EXPOSE 8000
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app on my browser on localhost:8000
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
But you can use docker-compose to set config and run your docker images easily.
First installing the docker-compose. Install Docker Compose
Second, create docker-compose.yml beside the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your docker with this command
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode): docker-compose up -d. Use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First get the IP of the Docker container by running:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser, or using curl, with that IP and the exposed port.
Note that you will not be able to access the container on 0.0.0.0, because the port is not mapped.
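Put together, the two steps above might look like this (the port comes from the question's EXPOSE 8000; substitute your actual container name):

```
# Grab the container's IP on the default bridge network...
IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id)
# ...and hit the exposed port directly, bypassing port publishing.
curl "http://$IP:8000"
```

This only works from the Docker host itself (and not on Docker Desktop for Mac/Windows, where the bridge network is not routable from the host), which is why -p is the usual approach.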

How to run a nodejs app in a mongodb docker image?

I am getting this error when I try to run the command "mongo" in the container's bash:
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused : connect@src/mongo/shell/mongo.js:328:13 @(connect):1:6 exception: connect failed
I'm trying to set up a new nodejs app in a mongo Docker image. The image builds fine from the Dockerfile on Docker Hub, I pull it, create a container, and everything is good, but when I try to type the "mongo" command in the bash I get the error.
This is my Dockerfile:
FROM mongo:4
RUN apt-get -y update
RUN apt-get install -y nodejs npm
RUN apt-get install -y curl python-software-properties
RUN curl -sL https://deb.nodesource.com/setup_11.x | bash -
RUN apt-get install -y nodejs
RUN node -v
RUN npm --version
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start"]
EXPOSE 3000
When your Dockerfile ends with CMD ["npm", "start"], it is building an image that runs your application instead of running the database.
Running two things in one container is slightly tricky and usually isn't considered a best practice. (You change your application code so you build a new image and delete and recreate your existing container; do you actually want to stop and delete your database at the same time?) You should run this as two separate containers, one running the standard mongo image and a second one based on a Dockerfile similar to this but FROM node. You might look into Docker Compose as a simple orchestration tool that can manage both containers together.
The one other thing that's missing in your example is any configuration that tells the application where its database is. In Docker this is almost never localhost ("this container", not "this physical host somewhere"). You should add a setting to pass that host name in as an environment variable. In Docker Compose you'd set it to the name of the services: block running the database.
version: '3'
services:
  mongodb:
    image: 'mongo:4'
    volumes:
      - './mongodb:/data/db'
  app:
    build: .
    ports:
      - '3000:3000'
    environment:
      MONGODB_HOST: mongodb
(https://hub.docker.com/_/mongo is worth reading in detail.)
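On the application side, the MONGODB_HOST variable set in the compose file might be consumed like this (a sketch; the database name mydb and the fallback to localhost are assumptions for illustration):

```javascript
// Build the MongoDB connection string from the environment rather than
// hardcoding localhost; under Compose, MONGODB_HOST is set to the
// database service name, and the fallback covers local development.
const host = process.env.MONGODB_HOST || 'localhost';
const url = `mongodb://${host}:27017/mydb`;
console.log(url);
```

The resulting url is what you would hand to the MongoDB driver's connect call in place of a hardcoded mongodb://127.0.0.1:27017 string.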

Why is my Docker container not running my Nodejs app?

End goal: To spin up a docker container running my expressjs application on port 3000 (as if I am using npm start).
Details:
I am using Windows 10 Enterprise:
This a very basic, front-end Expressjs application.
It runs fine using npm start – no errors.
Dockerfile I am using:
FROM node:8.11.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY src .
CMD node src/index.js
EXPOSE 3000
Steps:
I am able to create an image, using basic docker build command:
docker build -t portfolio-img .
Running the image (I am using this command from a tutorial www.katacoda.com/courses/docker/deploying-first-container):
docker run -d --name portfolio-container -p 3000:3000 portfolio-img
The container is not running. It is created, since I can inspect it, but it exited right after the run command. I am guessing I did something wrong with the last command, or I am not giving the correct instructions in the Dockerfile.
If anyone can point me in the right direction, I'd greatly appreciate it.
I have already searched a lot in the Docker documentation and on here.
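No answer was captured for this one, but a likely culprit, purely from reading the Dockerfile shown (so treat this as an assumption): COPY src . copies the contents of src into /app, so the entry file lands at /app/index.js, while CMD node src/index.js looks for /app/src/index.js. The container then exits immediately with a "Cannot find module" error, which docker logs portfolio-container would confirm. A sketch of the fix:

```
FROM node:8.11.2
WORKDIR /app
COPY package.json .
RUN npm install
# Copy src into a src/ subdirectory so /app/src/index.js actually exists;
# the original "COPY src ." flattened its contents into /app.
COPY src ./src
EXPOSE 3000
CMD ["node", "src/index.js"]
```

Alternatively, keep COPY src . and change the command to CMD ["node", "index.js"]; either way the path in CMD must match where the file ends up.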
