How do I run RethinkDB with Node.js on a server?
Here is my docker-compose.yml:
web:
  build: ./app
  volumes:
    - "./app:/src/app"
  ports:
    - "8000:8000"
    - "8080:8080"
    - "28015:28015"
    - "29015:29015"
  links:
    - "db:redis"
    - "rethink:rethinkdb"
  command: nodemon -L app/bin/www
db:
  image: redis
  ports:
    - "6379"
rethink:
  image: rethinkdb
There is a folder app containing the Node.js project and a Dockerfile.
Dockerfile:
FROM node:0.10.38
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src/app
ADD package.json package.json
RUN npm install
ADD nodemon.json nodemon.json
In the Node.js project, the connection to RethinkDB looks like this:
module.exports.setup = function() {
  r.connect({ host: dbConfig.host /* 127.0.0.1 */, port: dbConfig.port /* 28015, 29015, or 32783 */ }, function (err, connection) {
    ...
  });
};
And when I run docker-compose up, I get this error:
Could not connect to 127.0.0.1:32783.
Try exposing the ports in your docker-compose.yml:
web:
  build: ./app
  volumes:
    - "./app:/src/app"
  ports:
    - "8000:8000"
  depends_on:
    - db
    - rethink
  command: nodemon -L app/bin/www
db:
  image: redis
  ports:
    - "6379:6379"
rethink:
  image: rethinkdb
  ports:
    - "8080:8080"
    - "28015:28015"
    - "29015:29015"
Then set dbConfig.host = 'rethink' (the compose service name, since links is gone) and dbConfig.port = 28015:
module.exports.setup = function() {
  r.connect({ host: dbConfig.host /* 'rethink' */, port: dbConfig.port /* 28015 */ }, function (err, connection) {
    ...
  });
};
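For completeness, here is a minimal sketch of what the full setup function might look like inside the web container. The dbConfig values are assumptions matching the compose file above, not code from the original project:

var r = require('rethinkdb');

// 'rethink' is the compose service name; 28015 is the client driver port.
var dbConfig = { host: 'rethink', port: 28015 };

module.exports.setup = function () {
  r.connect({ host: dbConfig.host, port: dbConfig.port }, function (err, connection) {
    if (err) throw err;
    console.log('Connected to RethinkDB at ' + dbConfig.host + ':' + dbConfig.port);
  });
};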
Hope you find this helpful.
Related
I'm trying to start up PostgreSQL with TypeORM on a Node.js server and I'm getting this error: caught error # main DriverPackageNotInstalledError: Postgres package has not been found installed. Try to install it: npm install pg --save
I've checked that pg is installed in my Docker container.
deps versions:
"pg": "^8.8.0",
"typeorm": "^0.2.34",
My docker-compose:
version: '3.7'
services:
api:
profiles: ['api', 'web']
container_name: slots-api
stdin_open: true
build:
context: ./
dockerfile: api/Dockerfile
target: dev
environment:
NODE_ENV: development
ports:
- 8000:8000
- 9229:9229 # for debugging
volumes:
- ./api:/app/api
- /app/node_modules/ # do not mount node_modules
- /app/api/node_modules/ # do not mount node_modules
depends_on:
- database
command: yarn dev:api
database:
container_name: slots-database
image: postgres:alpine
restart: unless-stopped
ports:
- 5432:5432
depends_on:
- adminer
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin
POSTGRES_DB: dev
POSTGRES_HOST: 127.0.0.1
My Dockerfile:
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
COPY yarn.lock ./
RUN yarn install
FROM node:16-alpine AS dev
WORKDIR /app
COPY --from=builder /app/ /app/
COPY . .
RUN yarn build:api
EXPOSE 8000
CMD ["yarn", "dev"]
My typeorm config:
import { Connection, createConnection, DatabaseType } from 'typeorm';
import * as entities from '#/entities';
import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';

export const shouldCache = (): boolean => {
  return !['test', 'development'].includes(process.env.NODE_ENV ?? '');
};

export default async function postgresConnection(): Promise<Connection> {
  const config = {
    database: 'dev',
    entities: Object.values(entities),
    host: '127.0.0.1',
    password: 'admin',
    port: 5432,
    type: 'postgres' as DatabaseType,
    username: 'admin',
    synchronize: false,
    dropSchema:
      process.env.NODE_ENV !== 'production' &&
      process.env.POSTGRES_DROP_SCHEMA === 'true',
    migrations: ['dist/migrations/*.js'],
    migrationsRun: true,
    cache: shouldCache(),
  } as PostgresConnectionOptions;
  return await createConnection(config);
}
My server.ts file:
...
await postgresConnection().then(async () => {
  console.info('Database connected!');
});
...
It works without issues when running the server locally, with the database and adminer running in Docker.
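One thing worth checking (an assumption on my part, not something confirmed in the post): whether pg actually resolves inside the running container, since the anonymous /app/node_modules volumes keep their contents across image rebuilds and a stale volume could mask what yarn installed at build time. A minimal check script:

// check-pg.js — hedged sketch; run inside the container, e.g.
// docker compose exec api node check-pg.js
// If this throws MODULE_NOT_FOUND, the node_modules volume is stale;
// `docker compose down -v` recreates the anonymous volumes from the image.
try {
  console.log('pg version:', require('pg/package.json').version);
} catch (err) {
  console.error('pg is not resolvable from', process.cwd(), err.code);
}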
I am running a PERN application and currently trying to dockerize it. When I run the database as a container and the server and client locally I have no issues. However, when I containerize the server, client, and database respectively, I am unable to make requests. It results in 404 errors. This is the same behavior that occurs when I pass pool the wrong port or host. So I'm wondering if somehow I am giving the wrong host and/or port to pool or if I should change it when I containerize it.
This is the Pool instance:
const Pool = require('pg').Pool

const pool = new Pool({
  user: 'docker',
  password: 'docker',
  host: "localhost",
  port: 4000,
  database: "docker"
})
This is part of the REST API in the server:
const express = require("express")
const router = express.Router()
const pool = require('../database/database.js')
router.get("/:login", async (req, res) => {
  try {
    let loginReq = JSON.parse(decodeURIComponent(req.params.login))
    const user = await pool.query(
      "SELECT user_id,first_name,last_name,email FROM \"user\" where email = $1 and password = $2",
      [loginReq.email, loginReq.password]
    )
    if (user.rows.length) {
      res.json(user.rows[0])
    } else {
      throw new Error('User Not Found')
    }
  } catch (err) {
    res.status(404).json(err.message)
  }
})
This is each of my dockerfiles for my client, server, and database
Client:
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install --production
CMD ["npm","start"]
EXPOSE 3000
Server:
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install --production
CMD ["node","index.js"]
EXPOSE 5000
Database:
FROM postgres:15.1-alpine
COPY init.sql /docker-entrypoint-initdb.d/
This is my docker-compose.yml
version: "3.8"
services:
  client:
    build: ./client
    ports:
      - "3000:3000"
    restart: unless-stopped
  server:
    build: ./server
    ports:
      - "5000:5000"
    restart: unless-stopped
  database:
    build: ./database
    ports:
      - "4000:4000"
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=docker
      - POSTGRES_DB=docker
      - PGPORT=4000
    volumes:
      - kurva:/var/lib/postgresql/data
    restart: unless-stopped
volumes:
  kurva:
I don't understand why the behavior would be different between containerizing the server and running it locally when they all use the same ports. I have tried messing with the host and changing it to 0.0.0.0 but that did not help. Any help would be appreciated!
I found that it was a host issue: I needed to change the host in the pool to the service name. Additionally, I needed to declare that the server depends on the database service.
This is the updated pg pool:
const pool = new Pool({
  user: 'docker',
  password: 'docker',
  host: "database",
  port: 4000,
  database: "docker"
})
This is the updated docker-compose.yml
version: "3.8"
services:
  client:
    build: ./client
    ports:
      - "3000:3000"
    restart: unless-stopped
  server:
    build: ./server
    ports:
      - "5000:5000"
    depends_on:
      - database
    restart: unless-stopped
  database:
    build: ./database
    ports:
      - "4000:4000"
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=docker
      - POSTGRES_DB=docker
      - PGPORT=4000
    volumes:
      - kurva:/var/lib/postgresql/data
    restart: unless-stopped
volumes:
  kurva:
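A pattern worth considering on top of this fix (my suggestion, not part of the original answer): drive the pool from environment variables with local fallbacks, so the same code runs both locally and in compose:

// database/database.js — hedged sketch; PGUSER, PGHOST, etc. are the
// standard node-postgres environment variables.
const { Pool } = require('pg')

const pool = new Pool({
  user: process.env.PGUSER || 'docker',
  password: process.env.PGPASSWORD || 'docker',
  host: process.env.PGHOST || 'localhost', // set PGHOST=database in compose
  port: Number(process.env.PGPORT) || 4000,
  database: process.env.PGDATABASE || 'docker',
})

module.exports = pool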
I'm trying to connect to a Redis container from Docker. Redis is working in Docker: if I enter the container with docker exec -it 96e199a8badf sh, I can connect to the Redis server.
My Node.js application is below. I use redis version 4.1.0.
I don't know what's going on. How can I fix this?
docker-compose.yml
version: '3'
services:
  app:
    container_name: delivery-app
    build:
      dockerfile: 'Dockerfile'
      context: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - '/app/node_modules'
    networks:
      - redis
  redis:
    image: "redis"
    ports:
      - "6379:6379"
    networks:
      - redis
networks:
  redis:
    driver: bridge
Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD ["node","server.js"]
code:
const redisClient = redis.createClient({
  socket: {
    port: 6379,
    host: "redis"
  }
});

await redisClient.connect();

redisClient.on('connect', function () {
  return res.status(200).json({
    success: true
  })
}).on('error', function (error) {
  return res.status(400).json({
    success: false
  })
});
package.json
"redis": "^4.1.0"
I had the same issue and solved it by passing the host parameter inside a socket object in createClient, like this:
{
  socket: {
    host: "redis"
  }
}
This seems to be new behavior; in earlier versions you didn't have to use the socket object.
Here's the documentation for node-redis createClient
https://github.com/redis/node-redis/blob/HEAD/docs/client-configuration.md
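Putting it together, a minimal sketch for node-redis v4 under the compose setup above (the service name redis and port 6379 come from that file; the healthcheck key is just an illustration):

const { createClient } = require('redis');

// Host is the compose service name, not localhost; in v4 the
// connection options live under `socket`.
const redisClient = createClient({
  socket: {
    host: 'redis',
    port: 6379
  }
});

redisClient.on('error', (err) => console.error('Redis error:', err));

(async () => {
  await redisClient.connect(); // v4 requires an explicit connect()
  await redisClient.set('healthcheck', 'ok');
  console.log(await redisClient.get('healthcheck')); // prints 'ok'
})();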
I'm setting up an app with Docker, MongoDB, and Node.js (Mongoose), but for some reason I'm getting an authentication error.
UserNotFound: Could not find user "testuserdb" for db "admin"
Any suggestion?
Thanks!
.env
MONGO_USERNAME=testuserdb
MONGO_PASSWORD=testuserpass
MONGO_PORT=27017
MONGO_DB=testdb
db.js (mongoose connection)
const mongoose = require('mongoose');

const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;

const options = {
  useNewUrlParser: true,
  reconnectTries: Number.MAX_VALUE,
  reconnectInterval: 500,
  connectTimeoutMS: 10000,
};

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;

mongoose.connect(url, options).then(function () {
  console.log('MongoDB is connected');
})
.catch(function (err) {
  console.log(err);
});
docker-compose.yml
I use Docker environment variables for MongoDB and Node.js to define the username, password, database hostname, and database name:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile.dev
    depends_on:
      - db
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "3000:3000"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    networks:
      - app-network
    #command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
    command: /home/node/app/node_modules/.bin/nodemon app.js
  db:
    image: mongo:4.2.9-bionic
    container_name: db
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
      - MONGO_INITDB_DATABASE=$MONGO_DB
    volumes:
      - dbdata:/data/db
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
  node_modules:
As per the information available, I think the user testuserdb exists on testdb and not on the admin database, while in the connection URL you have specified authSource=admin.
Change one of these as per your needs.
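For illustration, a hedged sketch of the two options (variable names reused from the db.js above; which one is right depends on where the user was actually created):

// Option 1: authenticate against the database that owns the user.
const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=${MONGO_DB}`;

// Option 2: keep ?authSource=admin, but make sure the user lives in admin.
// (MONGO_INITDB_ROOT_USERNAME in the compose file creates a root user in
// the admin database, so authSource=admin should match that user.)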
I have a Node app which connects to a NATS server. Below is my docker-compose.yml for both servers.
version: "3"
services:
  app:
    image: node:12.13.1
    volumes:
      - ./:/app
    working_dir: /app
    depends_on:
      - mongo
      - nats
    environment:
      NODE_ENV: development
    ports:
      - 3000:3000
    command: npm run dev
  mongo:
    image: mongo
    expose:
      - 27017
    volumes:
      - ./data/db:/data/db
  nats:
    image: 'nats:2.1.2'
    # entrypoint: "/gnatsd -DV"
    # entrypoint: "/nats-server -p 4222 -m 8888 "
    expose:
      - "4222"
    ports:
      - "8222:8222"
    hostname: nats-server
app.post('/pets', function(req, res, next) {
  let nats = NATS.connect('nats://nats-server:4222');
  nats.publish('foo', 'Hello World!');
  res.json({ stats: 'success' });
});
The above Node.js snippet gives me:
NatsError: Could not connect to server: Error: getaddrinfo ENOTFOUND nats-server
If I use let nats = NATS.connect() it gives me:
NatsError: Could not connect to server: Error: connect ECONNREFUSED 127.0.0.1:4222
Kindly throw me some ideas on how to resolve this issue.
Thanks.
let nats = NATS.connect('nats://nats:4222');
You need to use the service name of the container (nats, not the hostname nats-server) and attach the internal client port: 4222 is the client port your service exposes, while 8222 is only the HTTP monitoring port.
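For reference, a minimal sketch under the compose file above (service name nats, client port 4222; connecting once at startup instead of per request is my suggestion, not part of the original answer):

const NATS = require('nats');

// 'nats' is the compose service name; compose DNS resolves it on the
// shared network. 4222 is the client port exposed by the nats service.
const nats = NATS.connect('nats://nats:4222');

nats.on('connect', () => {
  console.log('Connected to NATS');
});

nats.on('error', (err) => {
  console.error('NATS connection error:', err);
});

// The route from the question, reusing the shared connection:
app.post('/pets', function (req, res) {
  nats.publish('foo', 'Hello World!');
  res.json({ status: 'success' });
});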