Accessing container-side Postgres database via container-side Node.js web app

I have a Dockerfile for a Node.js app that overall looks like this:
FROM ubuntu
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
WORKDIR /app
COPY . /app
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y -q --no-install-recommends \
    apt-transport-https \
    build-essential \
    ca-certificates \
    curl \
    git \
    libssl-dev \
    wget \
    postgresql-10 postgresql-client-10 postgresql-contrib-10
USER postgres
RUN /etc/init.d/postgresql start && \
    psql --command "CREATE USER warbler WITH SUPERUSER;" && \
    createdb -O warbler warbler_store
# ... node setup stuff ...
# ...
# ...
RUN psql -U warbler -d warbler_store -f db_v1.sql
CMD ["node", "index.js"]
With this, though, I get the following error message:
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've looked around online a bit, and the majority of solutions I've found say that Docker is trying to connect to the host's Postgres instance; most of the other hits are questions about containers whose primary purpose is to run PostgreSQL. Is this accurate, and if so, is it still possible to run a container-side PostgreSQL instance that's accessible to the primary application?

Apparently it's not good practice to have your database in the same Docker container as your web app. I changed my container structure to run the postgres:10 image in a separate container and have the web app communicate with it via docker-compose. docker-compose lets you define services, and Docker's internal DNS lets them reach each other.
This is what the docker-compose.yaml looks like:
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
    links:
      - db
  db:
    image: postgres:10
    environment:
      - POSTGRES_USER=warbler
      - POSTGRES_DB=warbler_store
    volumes:
      - ./db_v1.sql:/docker-entrypoint-initdb.d/db_v1.sql
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
In this case, in the Dockerfile for web, I set the necessary environment variables so that Node connects to the database at postgresql://db:5432/warbler_store. The hostname db resolves via Docker's DNS to the IP address of the container running the db service, i.e. the postgres image container.
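For the app side, here's a minimal sketch of what that connection can look like, assuming the node-postgres pg package and a DATABASE_URL variable set on the web service (neither is named in the original setup):
// db.js - a sketch: connect to the "db" service by its Compose DNS name
const { Pool } = require('pg'); // assumes node-postgres is a dependency

const pool = new Pool({
  // falls back to the compose file above: user warbler, host db, database warbler_store
  connectionString:
    process.env.DATABASE_URL || 'postgresql://warbler@db:5432/warbler_store',
});

// simple smoke test against the warbler_store database
pool.query('SELECT 1')
  .then(() => console.log('connected to the db service'))
  .catch((err) => console.error('connection failed:', err));

module.exports = pool;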

Related

Docker app does not read daily updated data

I have developed a Docker app that reads data from a folder on my server (/myapp1/app/data). The data in this folder are updated daily. When I run the Docker app on my domain, the app reads the data from the folder, but once the data are updated the app keeps reading the old data. If I take the container down and run it again, then the app does read the new data.
My Dockerfile is the following:
# get shiny server and R from the rocker project
FROM rocker/shiny:4.0.5
RUN apt-get update && apt-get install -y \
    sudo \
    gdebi-core \
    pandoc \
    pandoc-citeproc \
    libcurl4-gnutls-dev \
    libxt-dev \
    libssl-dev \
    libxml2 \
    libxml2-dev \
    libsodium-dev
# install R packages required
RUN R -e "install.packages(c('shiny', 'shinythemes', 'dygraphs', 'shinyWidgets', 'manipulateWidget', 'DT', 'zoo', 'shinyjs','emayili', 'wordcloud2', 'rmarkdown', 'xts', 'shinyauthr', 'curl', 'jsonlite', 'httr', 'lubridate'), repos='http://cran.rstudio.com/')"
# copy the app directory into the image
WORKDIR /myapp1/app
COPY app .
# run app
EXPOSE 8090
CMD ["R", "-e", "shiny::runApp('/myapp1/app', host = '0.0.0.0', port = 8090)"]
My docker-compose.yml file is the following:
version: "3.7"
services:
app1:
image: myapp1image
container_name: myapp1container
expose:
- "8090"
environment:
- VIRTUAL_PORT=8090
- VIRTUAL_HOST=myapp1.net,www.myapp1.net
- LETSENCRYPT_HOST=myapp1.net,www.myapp1.net
- LETSENCRYPT_EMAIL=info#myapp1.net
volumes:
- /myapp1/app/data:/myapp1/app/data
networks:
- mynetwork
networks:
mynetwork:
external : true
My app should read the updated data without my having to take the container down and run it again every time the data are updated, so I would appreciate a solution to the problem described above.

How to dockerize aspnet core application and postgres sql with docker compose

I'm integrating a Dockerfile into my previous sample project so that everything is automated for easy code sharing and execution. I have a dockerizing problem and have tried to solve it, but to no avail. I hope someone can help. Thank you. Here is my problem:
My repository: https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate
Branch for dockerizing (my problem occurs on macOS):
https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate/pull/1
Dockerfile:
# syntax=docker/dockerfile:1
FROM node:16.11.1
FROM mcr.microsoft.com/dotnet/sdk:5.0
RUN apt-get update && \
apt-get install -y wget && \
apt-get install -y gnupg2 && \
wget -qO- https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y build-essential nodejs
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
RUN dotnet tool restore
EXPOSE 80/tcp
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
Docker compose:
version: "3.9"
services:
web:
container_name: backendnet5
build: .
ports:
- "5005:5000"
depends_on:
- database
database:
container_name: postgres
image: postgres:latest
ports:
- "5433:5433"
environment:
- POSTGRES_PASSWORD=admin
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
Commands:
docker-compose build
docker compose up
Problems:
I guess the problem is that I can't run dotnet ef database update for my migrations. Many thanks for any help.
In your appsettings.json file, you say that the database hostname is 'localhost'. In a container, localhost means the container itself.
Docker Compose creates a bridge network where you can address each container by its service name.
Your connection string is
User ID=postgres;Password=admin;Host=localhost;Port=5432;Database=sample_db;Pooling=true;
but should be
User ID=postgres;Password=admin;Host=database;Port=5432;Database=sample_db;Pooling=true;
You also map port 5433 on the database to the host, but Postgres listens on port 5432. If you want to expose it on port 5433 on the host, the mapping in the docker compose file should be 5433:5432. This is not what's causing your issue, though; it just prevents you from connecting to the database from the host, if you need to do that.
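For reference, a sketch of the corrected database: block with both port fixes applied (everything else is assumed unchanged from the compose file above):
  database:
    container_name: postgres
    image: postgres:latest
    ports:
      - "5433:5432"
    environment:
      - POSTGRES_PASSWORD=admin
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql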

Running chown in Dockerfile does nothing

I'm having some trouble setting up Nuxt and Rails containers using Docker. The two containers are separate but interact with each other.
Currently, I can't run the dev servers in either the Nuxt or the Rails container due to insufficient permissions. Looking at the logs for both containers, it seems that Docker can't perform actions such as mkdir.
EACCESS: Permission Denied: 'mkdir: /usr/src/app/.nuxt' # nuxt
EACCESS: Permission Denied: 'mkdir: /usr/src/app/tmp' # rails
My docker-compose.dev.yml file
version: "3"
services:
  backend:
    privileged: true
    image: tablevibes-backend
    build:
      dockerfile: Dockerfile-dev
      context: tablevibes-backend
      args:
        UID: ${UID:-1001}
        BUNDLER_VERSION: 2.0.2
        PG_MAJOR: 10
        mode: development
    tty: true
    stdin_open: true
    volumes:
      - ./tablevibes-backend:/usr/src/app:Z
      - gem_data_api:/usr/local/bundle:cached
    ports:
      - "3000:3000"
    depends_on:
      - db
    user: rails
  client-ui:
    image: client-ui
    command: yarn run dev
    build:
      context: client-ui
      dockerfile: Dockerfile-dev
      args:
        UID: ${UID:-1001}
        PORT: 5000
        MODE: DEV
    restart: always
    volumes:
      - ./client-ui:/usr/src/app
      - client_ui_node_modules:/usr/src/app/node_modules:cached
    ports:
      - 5000:5000
    user: client-ui
The 2 Dockerfiles
The Rails Dockerfile-dev
FROM ruby:2.6.3
ARG PG_MAJOR
ARG BUNDLER_VERSION
ARG UID
ARG MODE
RUN adduser rails --uid $UID --disabled-password --gecos ""
# Add POSTGRESQL to the source list using the right version
RUN curl -sSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
    && echo 'deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
ENV RAILS_ENV $MODE
RUN apt-get update -qq && apt-get install -y postgresql-client-$PG_MAJOR vim
RUN apt-get -y install sudo
WORKDIR /usr/src/app
CMD chown -R rails /usr/src/app
COPY Gemfile /usr/src/app/Gemfile
COPY Gemfile.lock /usr/src/app/Gemfile.lock
ENV BUNDLER_VERSION $BUNDLER_VERSION
RUN gem install bundler:$BUNDLER_VERSION
RUN bundle install
COPY . /usr/src/app
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
The Nuxt Dockerfile-dev
FROM node:10
ARG UID
ARG MODE=DEV
ARG PORT
RUN adduser client-ui --uid $UID --disabled-password --gecos ""
RUN apt-get update
RUN apt-get -y install sudo
RUN mkdir /usr/src/app
RUN chown -R client-ui /usr/src/app
COPY package.json yarn.lock /usr/src/app
RUN yarn install
COPY . /usr/src/app
ENV API_URL=http://localhost:3000/v1
ENV REVIEW_URL=http://localhost:8000
# expose 5000 on container
EXPOSE $PORT
# set app serving to permissive / assigned
ENV NUXT_HOST=0.0.0.0
# set app port
ENV NUXT_PORT=$PORT
My problem is that those lines where I do RUN chown ... never seem to take effect. If I go into the containers with docker exec -u root -it backend bash and run chown -R rails . manually, everything works as expected. Likewise, I tried running chmod 777 as a test, but that also had no effect on the permission-denied error I keep getting.
What might be causing Docker to ignore my chown command?
This Stack Overflow question seems relevant; however, it doesn't quite apply because I don't have any VOLUME mounts inside my Dockerfiles. A user in the comments of the accepted answer has the same issue as me, though unfortunately no solution.
Containers are like ogres, or onions, they have layers.
When using VOLUMEs or MOUNTs, the directory (or file) is not actually IN the container, but only appears to be in it.
Your Dockerfile uses a layer for /usr/src/app, which, as you probably already know, is actually your ./tablevibes-backend directory on your host computer:
services:
  backend:
    volumes:
      - ./tablevibes-backend:/usr/src/app:Z
When you use a volume or mount in this way, the only thing Docker can do is simple CRUD (create, read, update, delete) operations; it cannot (and should not) modify the metadata, because that would mean modifying your host drive, which could be a security issue.
try this:
sudo chown -R $USER:$USER .
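A usage sketch of that fix, run on the host (the project path is hypothetical):
$ cd ~/projects/tablevibes        # hypothetical project root containing docker-compose.dev.yml
$ sudo chown -R $USER:$USER .
$ docker-compose -f docker-compose.dev.yml up --build
Because the bind mount and the host directory are the same files, the ownership change is immediately visible inside the container as well.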

How to run a nodejs app in a mongodb docker image?

I am getting this error when I try to run the mongo command in the container's bash:
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :connect#src/mongo/shell/mongo.js:328:13 #(connect):1:6exception: connect failed
I'm trying to set up a new Node.js app in a Mongo Docker image. The image is built fine from the Dockerfile on Docker Hub; I pull it and create a container, and everything is good, but when I type the mongo command in bash I get the error above.
this is my dockerfile
FROM mongo:4
RUN apt-get -y update
RUN apt-get install -y nodejs npm
RUN apt-get install -y curl python-software-properties
RUN curl -sL https://deb.nodesource.com/setup_11.x | bash -
RUN apt-get install -y nodejs
RUN node -v
RUN npm --version
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start"]
EXPOSE 3000
When your Dockerfile ends with CMD ["npm", "start"], it is building an image that runs your application instead of running the database.
Running two things in one container is slightly tricky and usually isn't considered a best practice. (You change your application code so you build a new image and delete and recreate your existing container; do you actually want to stop and delete your database at the same time?) You should run this as two separate containers, one running the standard mongo image and a second one based on a Dockerfile similar to this but FROM node. You might look into Docker Compose as a simple orchestration tool that can manage both containers together.
The one other thing that's missing in your example is any configuration that tells the application where its database is. In Docker this is almost never localhost ("this container", not "this physical host somewhere"). You should add a way to pass that host name in as an environment variable. In Docker Compose you'd set it to the name of the services: block running the database.
version: '3'
services:
  mongodb:
    image: mongo:4
    volumes:
      - './mongodb:/data/db'
  app:
    build: .
    ports:
      - '3000:3000'
    environment:
      MONGODB_HOST: mongodb
(https://hub.docker.com/_/mongo is worth reading in detail.)
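As a sketch of the app side, assuming the official mongodb Node.js driver (v4+) and the MONGODB_HOST variable from the compose file above (the database name mydb is a placeholder):
// db.js - a sketch: resolve the database host from the environment, not localhost
const { MongoClient } = require('mongodb'); // assumes the official driver is a dependency

const host = process.env.MONGODB_HOST || 'localhost'; // falls back for non-Docker runs
const client = new MongoClient(`mongodb://${host}:27017`);

async function connect() {
  await client.connect();
  return client.db('mydb'); // 'mydb' is a placeholder database name
}

module.exports = { connect };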

Inside Docker container NetworkingError: connect ECONNREFUSED 127.0.0.1:8002

I'm building a Node.js app which runs in a Docker container, and I'm getting the following error:
NetworkingError: connect ECONNREFUSED 127.0.0.1:8000
And if I try with dynamodb-local:8000 instead, it gives me the following error:
NetworkingError: write EPROTO 140494555330368:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:252:
I am using the following docker-compose.yml:
version: "3"
services:
node_app:
build: .
container_name: 'node_app'
restart: 'always'
command: 'npm run start:local'
ports:
- "3146:3146"
links:
- dynamodb-local
dynamodb-local:
container_name: 'dynamodb-local'
build: dynamodb-local/
restart: 'always'
ports:
- "8000:8000"
The Node.js Docker configuration (node_app) is as follows:
FROM node:latest
RUN mkdir -p /app/node_app
WORKDIR /app/node_app
# Install app dependencies
COPY package.json /app/node_app
#RUN npm cache clean --force && npm install
RUN npm install
# Bundle app source
COPY . /app/node_app
# Build the built version
EXPOSE 3146
#RUN npm run dev
CMD ["npm", "start"]
The DynamoDB Local Docker configuration (dynamodb-local) is as follows:
#
# Dockerfile for DynamoDB Local
#
# https://aws.amazon.com/blogs/aws/dynamodb-local-for-desktop-development/
#
FROM openjdk:7-jre
RUN mkdir -p /var/dynamodb_local
RUN mkdir -p /var/dynamodb_picstgraph
# Create working space
WORKDIR /var/dynamodb_picstgraph
# Default port for DynamoDB Local
EXPOSE 8000
# Get the package from Amazon
RUN wget -O /tmp/dynamodb_local_latest https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.tar.gz && \
    tar xfz /tmp/dynamodb_local_latest && \
    rm -f /tmp/dynamodb_local_latest
# Default command for image
ENTRYPOINT ["/usr/bin/java", "-Djava.library.path=.", "-jar", "DynamoDBLocal.jar", "-sharedDb", "-dbPath", "/var/dynamodb_local"]
CMD ["-port", "8000"]
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/var/dynamodb_local", "/var/dynamodb_nodeapp"]
But when I try to connect to the local DynamoDB from outside the Docker container, it works perfectly.
Please help me sort out this issue.
Inside the Docker container, the DB will be available at the host dynamodb-local:8000.
It might be an SSL issue; check your Apache configuration in case you have used the port for another application.
In that case, you can run DynamoDB Local on another port, as follows:
#
# Dockerfile for DynamoDB Local
#
# https://aws.amazon.com/blogs/aws/dynamodb-local-for-desktop-development/
#
FROM openjdk:7-jre
RUN mkdir -p /var/dynamodb_local
RUN mkdir -p /var/dynamodb_picstgraph
# Create working space
WORKDIR /var/dynamodb_picstgraph
# Default port for DynamoDB Local
EXPOSE 8004
# Get the package from Amazon
RUN wget -O /tmp/dynamodb_local_latest https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.tar.gz && \
    tar xfz /tmp/dynamodb_local_latest && \
    rm -f /tmp/dynamodb_local_latest
# Default command for image
ENTRYPOINT ["/usr/bin/java", "-Djava.library.path=.", "-jar", "DynamoDBLocal.jar", "-sharedDb", "-dbPath", "/var/dynamodb_local"]
CMD ["-port", "8004"]
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/var/dynamodb_local", "/var/dynamodb_nodeapp"]
Now in your docker container, the DB will be available with the host dynamodb-local:8004.
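On the application side, the client also has to be pointed at that host explicitly. Here's a minimal sketch, assuming the aws-sdk v2 package; the region is a placeholder, and DynamoDB Local accepts any credentials. Note the endpoint uses plain http://, since the EPROTO/wrong version number error is typically what you see when a client speaks TLS to a plain-HTTP listener:
// dynamo.js - a sketch: point the AWS SDK at the container-side endpoint
const AWS = require('aws-sdk'); // assumes aws-sdk v2 is a dependency

const dynamodb = new AWS.DynamoDB({
  region: 'us-west-2',                    // placeholder; DynamoDB Local ignores the region
  endpoint: 'http://dynamodb-local:8004', // compose service name and the remapped port
  accessKeyId: 'fake',                    // DynamoDB Local accepts any credentials
  secretAccessKey: 'fake',
});

// smoke test: list tables through the container network
dynamodb.listTables({}).promise()
  .then((res) => console.log('tables:', res.TableNames))
  .catch((err) => console.error('connection failed:', err));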
