How to solve "User `postgres` was denied access on the database `.public`"? - node.js

I use prisma and node.js.
When I call some functions (for example prisma.users.findAll()) in the Docker container, I get the error User 'postgres' was denied access on the database 'my_db.public', but if I run it locally I don't have any problem.
The containers start successfully, but any call that touches the database fails with this error.
My Dockerfile:
FROM node:15.13.0
RUN mkdir -p /project/node_modules && chown -R node:node /project
WORKDIR /project
COPY package*.json ./
COPY --chown=node:node prisma ./prisma
COPY config ./config
RUN npm install
RUN npx prisma generate
RUN npx prisma db push --preview-feature
COPY --chown=node:node ./temp ./temp
COPY --chown=node:node . .
CMD [ "node", "index.js" ]
Also, my db
my_db | postgres | UTF8 | C.UTF-8 | C.UTF-8 |
prisma settings
DATABASE_URL=postgresql://postgres:password@172.17.0.1:5432/my_db?connect_timeout=300&connection_limit=150

If you can reproduce the same error message by running either of these two Prisma commands in your project,
npx prisma db pull
npx prisma generate
it means that the error comes from a bad DATABASE_URL value.
For example: the user should be chris instead of postgres.
// wrong user
DATABASE_URL="postgresql://postgres:passw@localhost:5432/dbname?schema=public"
// correct user
DATABASE_URL="postgresql://chris:passw@localhost:5432/dbname?schema=public"
Make sure you have the correct values.
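To quickly confirm whether the URL itself is the problem, you can run a tiny script inside the container. This is only a sketch: check-db.js is a hypothetical file name, and it assumes the client was already generated with npx prisma generate.
// check-db.js - hypothetical sanity check for the DATABASE_URL credentials
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();
prisma.$connect()
  .then(() => console.log('Connected: the credentials in DATABASE_URL were accepted'))
  .catch((e) => console.error(e)) // an access-denied error here points at the URL, not the app code
  .finally(() => prisma.$disconnect());
Running node check-db.js inside the container separates a bad DATABASE_URL from a problem in the application code.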

Related

Build Nest.js in Docker got a Prisma error

I am building an application in Nest.js, and then I want to dockerize it using Docker. This is my Dockerfile:
FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
# Install app dependencies
RUN npm install
COPY . .
RUN npm run build
FROM node:14
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD [ "npm", "run", "start:prod" ]
Then when I run:
docker build -t medicine-api .
I got this error from Prisma:
Module '"@prisma/client"' has no exported member 'User'.
3 import { User } from '@prisma/client';
and this is my prisma.schema file:
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
  provider = "prisma-client-js"
}
generator prismaClassGenerator {
  provider = "prisma-class-generator"
  dryRun   = false
}
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
model User {
  id               Int                @id @default(autoincrement())
  phoneNumber      String             @unique
  lastName         String
  firstName        String
  role             Role
  bio              String?
  certificate      String?
  pic              String?
  verified         Boolean            @default(false)
  medicine         Medicine[]
  pharmacyMedicine PharmacyMedicine[]
  medicineCategory MedicineCategory[]
  pharmacyPackage  PharmacyPackage[]
  pharmacistOrder  Order[]            @relation("pharmacistOrder")
  userOrder        Order[]            @relation("userOrder")
}
I tried to fix this by searching through different resources and websites; they recommended putting npx prisma generate in my Dockerfile. But I still get another error:
Error: Generator at prisma-class-generator could not start:
/bin/sh: 1: prisma-class-generator: not found
If you have any solutions, I am really happy to try them. Thanks in advance.
You have to generate the Prisma client by running the command
yarn prisma generate
This should come before the step of copying the prisma folder,
so I would suggest changing the Dockerfile to be:
FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
# Install app dependencies
RUN npm install
COPY . .
RUN yarn prisma generate
COPY prisma ./prisma/
RUN npm run build
FROM node:14
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD [ "npm", "run", "start:prod" ]
The prisma generate step will make sure the Prisma client is present in your node_modules, so the generated model types (such as User) can be imported.
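As for the prisma-class-generator: not found error mentioned in the question: Prisma starts each generator as a separate executable from node_modules, so the generator package must be installed before npx prisma generate runs. Here is a sketch of the relevant Dockerfile lines (it assumes the generator is published as the prisma-class-generator npm package, matching the provider name in the schema; adjust if the package name differs):
COPY package*.json ./
COPY prisma ./prisma/
# the generator must be declared in package.json, e.g. added locally with:
#   npm install --save-dev prisma-class-generator
RUN npm install
# with the generator binary now in node_modules/.bin, generate can start it
RUN npx prisma generate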

Can't connect to SSH from docker container ECONNREFUSED 127.0.0.1:22 - NodeJS

I'm using the node-ssh module on Node.js. When I start the SSH connection, it gives an error. I'm using WSL Ubuntu 18 and have a docker-compose file. I set PasswordAuthentication to 'yes' in /etc/ssh/sshd_config, and I can connect via SSH from WSL Ubuntu. But when I try to connect from my dockerized Node.js project, it gives the error ECONNREFUSED 127.0.0.1:22.
In Node.js I'm making a request for user authentication, running some commands, etc.
const Client = require('node-ssh').NodeSSH;
var client = new Client();
client.connect({
  host : 'localhost',
  port : 22,
  username : req.body.username,
  password : req.body.password,
  keepaliveInterval : 30 * 1000, // keepalive every 30 seconds (value is in milliseconds)
  keepaliveCountMax : 1,
}).then(()=>{
  // LOGIN SUCCESS
}).catch((e)=>{
  console.log(e); // ECONNREFUSED error
  // LOGIN FAILED
});
docker-compose.yml
version: '3.8'
services:
  api:
    build:
      dockerfile: Dockerfile
      context: "./server"
    ports:
      - "3030:3030"
    depends_on:
      - mysql_db
    volumes:
      - /app/node_modules
      - ./server:/app
    ...
And my API's Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i
RUN apk update \
&& apk add openssh-server
COPY sshd_config /etc/ssh/
EXPOSE 22
CMD ["npm", "run", "start"]
[UPDATE 1]
[Dockerfile]
FROM node:alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i \
&& apk add --update openssh \
&& rm -rf /tmp/* /var/cache/apk/*
COPY sshd_config /etc/ssh/
# add entrypoint script
ADD ./docker-entrypoint.sh /usr/local/bin
# make sure we get fresh keys
RUN rm -rf /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_dsa_key
EXPOSE 22
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/usr/sbin/sshd","-D"]
[UPDATE 2] [Dockerfile]
FROM node:alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i
RUN apk update && \
apk add openssh-client \
&& rm -rf /tmp/* /var/cache/apk/*
EXPOSE 22
CMD ["npm", "run", "start"]
[SOLUTION]
I have changed my Dockerfile and my Node.js code. I connected to WSL's SSH from the Docker container after using host.docker.internal, as Stefan Golubović suggested, and used the node:latest image instead of node:alpine. Thanks to @StefanGolubović and @Etienne Dijon.
[FIXED]
const Client = require('node-ssh').NodeSSH;
var client = new Client();
client.connect({
  host : 'host.docker.internal', // this worked on WSL2
  port : 22,
  username : req.body.username,
  password : req.body.password,
  keepaliveInterval : 30 * 1000, // keepalive every 30 seconds (value is in milliseconds)
  keepaliveCountMax : 1,
}).then(()=>{
  // LOGIN SUCCESS
}).catch((e)=>{
  console.log(e);
  // LOGIN FAILED
});
Dockerfile [FIXED]
FROM node:latest
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i
RUN apt-get update
EXPOSE 22
CMD ["npm", "run", "start"]
Short answer
The sshd server is not started automatically by default on Alpine.
You may use another Node image to run your application, like node:latest
https://hub.docker.com/_/node
which is based on Debian and is an equivalent alternative to node:alpine.
Try to avoid SSH in a Docker container; you may instead use a script as an entrypoint to configure your container at runtime.
Documentation : https://docs.docker.com/engine/reference/builder/#entrypoint
Best practices with example of script : https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#entrypoint
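For illustration, a minimal entrypoint script along those lines could look like this (a sketch only; docker-entrypoint.sh and the commented config step are placeholders, not part of the original setup):
#!/bin/sh
# docker-entrypoint.sh - do one-time runtime configuration here instead of running sshd
set -e
# placeholder: prepare configuration from environment variables before the app starts
# node ./scripts/write-config.js
# hand control over to whatever CMD was given (e.g. npm run start)
exec "$@"
The Dockerfile then points ENTRYPOINT at this script and keeps the normal CMD, as in the examples further down.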
Test your Dockerfile step by step
Something you can do to make sure everything works fine is to run it manually:
docker run -it --rm --name testalpine -v $PWD:/app/ node:alpine /bin/sh
Then :
cd /app/
npm i
apk update && apk add openssh-server
# show listening services, openssh is not displayed
netstat -tlpn
As you can see, openssh is not started automatically
Alpine has a wiki page about setting this up, which requires rc-update:
https://wiki.alpinelinux.org/wiki/Setting_up_a_ssh-server
rc-update is not available in the Alpine image.
Running an sshd server in an Alpine container
This image is all about running an SSH server on Alpine:
https://github.com/danielguerra69/alpine-sshd
As you can see in its Dockerfile, more steps are involved:
Check the repository for an updated Dockerfile
FROM alpine:edge
MAINTAINER Daniel Guerra <daniel.guerra69@gmail.com>
# add openssh and clean
RUN apk add --update openssh \
&& rm -rf /tmp/* /var/cache/apk/*
# add entrypoint script
ADD docker-entrypoint.sh /usr/local/bin
# make sure we get fresh keys
RUN rm -rf /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_dsa_key
EXPOSE 22
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/usr/sbin/sshd","-D"]
EDIT: If you need to run commands within your container
You can use docker exec once your container is started:
docker exec -it <container name/id> /bin/sh
Documentation here:
https://docs.docker.com/engine/reference/commandline/exec/
Updated dockerfile
FROM node:alpine
WORKDIR /app
COPY ./ ./
RUN npm i
ENTRYPOINT ["npm", "run", "start"]

Permission denied: Missing write access to /app - docker error

FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install
COPY . .
ENV APP_URL=http://api.myapp.com
EXPOSE 3000
CMD ["npm", "start"]
This is my Dockerfile. I am trying to dockerize a sample React app. I added a user to a group and then used that user for further commands, as you can see in the second line. I believe that by default only the root user has write access to these files, and to change them the container should not run as root. Hence I created the app user here.
But after running docker build -t react-app . I am getting the following error:
What am I doing wrong here? Any suggestions?
After adding RUN ls -la and RUN whoami:
You are seeing this error because /app directory belongs to root user. The user app which you created has no write permission to this directory. User app needs write permission to install node packages (create node_modules directory and package-lock.json file).
As suggested by @DavidMaze in the comments, it would be easier to do the package installation as the root user and switch to USER app at the end, just before the runtime CMD ["npm", "start"].
But the app user would still need write permission to the node_modules/.cache directory when running the app with the npm start command. Hence, we need to grant the app user write permission on this directory.
Here is an example that does all mentioned above:
FROM node:14.16.0-alpine3.13
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install
COPY . .
ENV APP_URL=http://api.myapp.com
EXPOSE 3000
RUN addgroup app && adduser -S -G app app
RUN mkdir node_modules/.cache
RUN chown app:app node_modules/.cache
USER app
CMD ["npm", "start"]
Also, note that you are running the React app in development mode using npm start; you might want to create a production build and serve it with a static server instead.
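One common pattern for that is a multi-stage build that compiles the app and lets a static server deliver the files. This is only a sketch, not the author's setup: it assumes a Create React App-style project whose npm run build output lands in /app/build, and it uses the public nginx:alpine image.
FROM node:14.16.0-alpine3.13 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# serve the static production build instead of the development server
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80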
When you create a user with the name "app", a new directory for this user is created in the home directory: "/home/app".
So the workdir should be like this:
WORKDIR /home/app

ConnectionError in Docker with Nodejs (Hapi.js) and Prisma

I would like to put my Node.js app into a Docker container. When deploying it via npm run build and start, I can send requests to it.
But when creating a Docker image I run into problems:
First I have an EXPOSE 8080 in my Dockerfile. Then I am running docker run -p=3000:8080 --env-file .env my-docker-file. After that I am getting the info that the server is running on http://localhost:3000.
I know localhost:3000 is just what is printed inside the container, but at least the container is running.
When I use the command http localhost:3000 (or the browser) I am getting http: error: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) while doing a GET request to URL: http://localhost:3000/.
Does someone have an idea what's going wrong? I have no clue.
Thanks for all hints that point me in the right direction.
My Dockerfile:
## this is stage one, also known as the build step
FROM node:12.17.0-alpine as builder
WORKDIR /app
COPY package*.json ./
COPY prisma ./prisma/
COPY tsconfig.json .
COPY src ./src/
COPY tests ./tests/
RUN npm install
RUN npx prisma generate
COPY . .
RUN npm run build
## this is stage two, where the app actually runs
FROM node:12.17.0-alpine
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 8080
CMD npm start
If you use a Dockerfile, you first need to build your image.
FROM node:12.17.0-alpine as builder
WORKDIR /app
COPY . .
RUN npm install
RUN npx prisma generate
RUN npm run build
## this is stage two, where the app actually runs
FROM node:12.17.0-alpine
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 8080
CMD ["npm","start"]
From the location where your Dockerfile is located:
docker build -t your-image-name .
docker run -p 3000:8080 --env-file .env your-image-name
Did you check the IP address?
When I first deployed my Node project to Docker, I couldn't access it either, because my Node project was listening for localhost requests only. If you don't specify your network as host, your Docker container will have some other IP address in your subnet.
I changed my Node project's listening IP address to 0.0.0.0, and after that I could connect to my Node project running in a Docker container.
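Since the question uses Hapi.js, here is a minimal sketch of what that change looks like (it assumes the @hapi/hapi package; the port just has to match the EXPOSE/-p mapping):
// index.js - bind to all interfaces so the published port reaches the app
const Hapi = require('@hapi/hapi');
const init = async () => {
  const server = Hapi.server({
    port: 8080,      // matches EXPOSE 8080 and the -p 3000:8080 mapping
    host: '0.0.0.0', // listen on all interfaces, not only on localhost inside the container
  });
  await server.start();
  console.log('Server running on %s', server.info.uri);
};
init();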

permission denied, mkdir in container on openshift

I have a container with Node.js and pm2 as the start command, and on OpenShift I get this error on startup:
Error: EACCES: permission denied, mkdir '/.pm2'
I tried the same image on a Marathon-based host and it worked fine.
Do I need to change something with user IDs?
The Dockerfile:
FROM node:7.4-alpine
RUN npm install --global yarn pm2
RUN mkdir /src
COPY . /src
WORKDIR /src
RUN yarn install --production
EXPOSE 8100
CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"]
Update
The node image already creates a new user "node" with UID 1000 so that the image does not run as root.
I also tried to fix permissions and add the user "node" to the root group.
Further, I told pm2 which directory it should use with an ENV var:
PM2_HOME=/home/node/app/.pm2
But I still get the error:
Error: EACCES: permission denied, mkdir '/home/node/app/.pm2'
Updated Dockerfile:
FROM node:7.4-alpine
RUN npm install --global yarn pm2
RUN adduser node root
COPY . /home/node/app
WORKDIR /home/node/app
RUN chmod -R 755 /home/node/app
RUN chown -R node:node /home/node/app
RUN yarn install --production
EXPOSE 8100
USER 1000
CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"]
Update 2
Thanks to Graham Dumpleton I got it working:
FROM node:7.4-alpine
RUN npm install --global yarn pm2
RUN adduser node root
COPY . /home/node/app
WORKDIR /home/node/app
RUN yarn install --production
RUN chmod -R 775 /home/node/app
RUN chown -R node:root /home/node/app
EXPOSE 8100
USER 1000
CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"]
OpenShift will by default run containers as a non root user. As a result, your application can fail if it requires it runs as root. Whether you can configure your container to run as root will depend on permissions you have in the cluster.
It is better to design your container and application so that it doesn't have to run as root.
A few suggestions.
Create a special UNIX user to run the application as, and set that user (using its uid) in the USER statement of the Dockerfile. Make the user's group the root group.
Fix up permissions on the /src directory and everything under it so it is owned by the special user. Ensure that everything is group root, and that anything that needs to be writable is writable by group root.
Ensure you set HOME to /src in the Dockerfile.
With that done, when OpenShift runs your container as an assigned uid with group root, then by virtue of everything being group writable, the application can still update files under /src. Setting the HOME variable ensures that anything the code writes to the home directory goes into the writable /src area.
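A sketch of a Dockerfile that follows those suggestions (the uid 1001 and the user name appuser are only examples, not values from the original question):
FROM node:7.4-alpine
RUN npm install --global yarn pm2
# dedicated non-root user whose primary group is root, so group permissions apply
RUN adduser -S -u 1001 -G root appuser
COPY . /src
WORKDIR /src
RUN yarn install --production
# owned by the user, group root, with group permissions mirroring the owner's
RUN chown -R 1001:root /src && chmod -R g=u /src
# anything written to the home directory ends up in the writable /src area
ENV HOME=/src
EXPOSE 8100
USER 1001
CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"]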
You can also run the command below, which allows workloads using the default service account in the project you are logged in to to run as any UID, including root:
oc adm policy add-scc-to-user anyuid -z default
Graham Dumpleton's solution works but is not recommended.
OpenShift will use random UIDs when running containers.
You can see that in the generated YAML of your Pod:
spec:
- resources:
  securityContext:
    runAsUser: 1005120000
You should instead apply Docker security best practices to write your Dockerfile.
Do not bind the execution of your application to a specific UID: make resources world-readable (e.g., 0644 instead of 0640) and executable when needed.
Make executables owned by root and not writable.
For a full list of recommendations see: https://sysdig.com/blog/dockerfile-best-practices/
In your case, there is no need to:
RUN adduser node root
...
RUN chown -R node:node /home/node/app
USER 1000
In the original question, the application files are already owned by root.
The following chmod is enough to make them readable and executable by everyone:
RUN chmod -R 775 /home/node/app
What kind of OpenShift are you using?
You can edit the "restricted" Security Context Constraints:
From the OpenShift CLI:
oc edit scc restricted
And change:
runAsUser:
  type: RunAsUser
to
runAsUser:
  type: RunAsAny
Note that Graham Dumpleton's answer is the proper one.
