Connecting YugabyteDB to Nakama in docker-compose

[Question posted by a user on YugabyteDB Community Slack]
I have set up YugabyteDB and Nakama (a server for realtime social and web & mobile game apps) as docker containers. Both work and I can query YugabyteDB through DBeaver, and now I want to connect it to the Nakama server.
In the Nakama compose file they give examples for CockroachDB and PostgreSQL, but not for YugabyteDB specifically.
Below is the docker-compose where the database configuration is set up for PostgreSQL. I'd like some help with the YugabyteDB configuration:
version: '3'
services:
  postgres:
    container_name: postgres
    image: postgres:9.6-alpine
    environment:
      - POSTGRES_DB=nakama
      - POSTGRES_PASSWORD=localdb
    volumes:
      - data:/var/lib/postgresql/data
    expose:
      - "8080"
      - "5432"
    ports:
      - "5432:5432"
      - "8080:8080"
  nakama:
    container_name: nakama
    image: heroiclabs/nakama:3.12.0
    entrypoint:
      - "/bin/sh"
      - "-ecx"
      - >
        /nakama/nakama migrate up --database.address postgres:localdb@postgres:5432/nakama &&
        exec /nakama/nakama --name nakama1 --database.address postgres:localdb@postgres:5432/nakama --logger.level DEBUG --session.token_expiry_sec 7200
    restart: always
    links:
      - "postgres:db"
    depends_on:
      - postgres
    volumes:
      - ./:/nakama/data
    expose:
      - "7349"
      - "7350"
      - "7351"
    ports:
      - "7349:7349"
      - "7350:7350"
      - "7351:7351"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:7350/"]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  data:

First, remove the postgres service and add the following to start a single-node cluster (one container for the tserver and another for the master). You can bump up the YugabyteDB version:
  yb-master:
    image: yugabytedb/yugabyte:2.12.1.0-b41
    container_name: yb-master-n1
    volumes:
      - yb-master-data-1:/mnt/master
    command: [ "/home/yugabyte/bin/yb-master",
               "--fs_data_dirs=/mnt/master",
               "--webserver_port=7001",
               "--master_addresses=yb-master-n1:7100",
               "--rpc_bind_addresses=yb-master-n1:7100",
               "--replication_factor=1",
               "--ysql_num_shards_per_tserver=1"]
    ports:
      - "7001:7001"
    environment:
      SERVICE_7001_NAME: yb-master
  yb-tserver:
    image: yugabytedb/yugabyte:2.12.1.0-b41
    container_name: yb-tserver-n1
    volumes:
      - yb-tserver-data-1:/mnt/tserver
      - ./hasura/seeds:/mnt/seed_data
    command: [ "/home/yugabyte/bin/yb-tserver",
               "--fs_data_dirs=/mnt/tserver",
               "--webserver_port=9001",
               "--start_pgsql_proxy",
               "--rpc_bind_addresses=yb-tserver-n1:9100",
               "--tserver_master_addrs=yb-master-n1:7100",
               "--ysql_num_shards_per_tserver=1"]
    ports:
      - "5433:5433"
      - "9001:9001"
    environment:
      SERVICE_5433_NAME: ysql
      SERVICE_9001_NAME: yb-tserver
    depends_on:
      - yb-master
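Note that the two named volumes referenced above (yb-master-data-1 and yb-tserver-data-1) also have to be declared in the top-level volumes: section of the compose file, alongside or in place of the existing data volume:
volumes:
  yb-master-data-1:
  yb-tserver-data-1: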
Second, replace the occurrences of the postgres connection string with the following:
postgresql://yugabyte:yugabyte@yb-tserver:5433/yugabyte
This should work for you.
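As a minimal sketch (assuming the service names above and keeping Nakama's original flags), the nakama service would then point at the tserver instead of postgres:
  nakama:
    container_name: nakama
    image: heroiclabs/nakama:3.12.0
    entrypoint:
      - "/bin/sh"
      - "-ecx"
      - >
        /nakama/nakama migrate up --database.address postgresql://yugabyte:yugabyte@yb-tserver:5433/yugabyte &&
        exec /nakama/nakama --name nakama1 --database.address postgresql://yugabyte:yugabyte@yb-tserver:5433/yugabyte --logger.level DEBUG --session.token_expiry_sec 7200
    restart: always
    depends_on:
      - yb-tserver
    ...
The links: entry pointing at postgres can be dropped; the remaining ports, volumes and healthcheck settings stay as they were.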

Related

Unable to Authenticate Mongo-express on MongoDB with Docker Compose

I'm trying to connect Mongo Express and my Node app to MongoDB with docker-compose, but I'm unable to authenticate.
I cannot connect those services with docker-compose; I continuously get this error:
Could not connect to database using connectionString: mongodb://admin:password@mongo:27017"
The following is my docker-compose.yml:
version: '3'
services:
  app:
    build: .
    env_file: .env
    restart: always
    environment:
      - NODE_ENV=production
    ports:
      - "8080:8080"
    volumes:
      - .:/usr/src/app
    command: yarn dev
    depends_on:
      - mongodb
    networks:
      - mongo-compose-network
    links:
      - mongodb
  mongodb:
    container_name: mongodb
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    ports:
      - "27017:27017"
    volumes:
      - mongodbdata:/data/db
    networks:
      - mongo-compose-network
  mongo-express:
    image: mongo-express
    depends_on:
      - mongodb
    ports:
      - "8888:8081"
    links:
      - mongodb
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_URL=mongodb://admin:password@mongo:27017
    networks:
      - mongo-compose-network
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
    networks:
      - mongo-compose-network
    volumes:
      - redisdata:/data
  celery:
    build: .
    command: node celery.js worker --loglevel=info
    depends_on:
      - app
      - mongodb
      - redis
    networks:
      - mongo-compose-network
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
volumes:
  mongodbdata:
  redisdata:
networks:
  mongo-compose-network:
    driver: bridge
Resolved by creating an admin user from inside the mongodb container.
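For reference, a rough sketch of that step (assuming the container name mongodb from the compose file above and the legacy mongo shell that ships in the image; newer images use mongosh):
docker exec -it mongodb mongo admin
> db.createUser({ user: "admin", pwd: "password", roles: [ { role: "root", db: "admin" } ] })
The MONGO_INITDB_ROOT_* variables only take effect when the data volume is empty, which is why the user sometimes has to be created manually on an existing volume.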

Docker Cassandra Access in Client Running Under Same Docker

My docker-compose file is as below:
cassandra-db:
  container_name: cassandra-db
  image: cassandra:4.0-beta1
  ports:
    - "9042:9042"
  restart: on-failure
  volumes:
    - ./out/cassandra_data:/var/lib/cassandra
  environment:
    - CASSANDRA_CLUSTER_NAME='cassandra-cluster'
    - CASSANDRA_NUM_TOKENS=256
    - CASSANDRA_RPC_ADDRESS=0.0.0.0
  networks:
    - my-network
client-service:
  container_name: client-service
  image: client-service
  environment:
    - SPRING_PROFILES_ACTIVE=dev
  ports:
    - 8087:8087
  links:
    - cassandra-db
  networks:
    - my-network
networks:
  my-network:
I use the DataStax Java driver to connect to Cassandra in the client service, which also runs inside Docker.
CqlSession.builder()
    .addContactEndPoint(new DefaultEndPoint(
        InetSocketAddress.createUnresolved("cassandra-db", 9042)))
    .withKeyspace(CassandraConstant.KEY_SPACE_NAME.getValue())
    .build()
I use the DNS name to connect but it does not connect; I also tried the Docker IP of the cassandra container, and depends_on as well.
Is there any issue with the docker-compose file?

Make multiple Node.js dockerized apps communicate with a MongoDB docker container

I've built two NestJS apps which share a common database. I've been trying to figure out a way to make these two apps get data from the MongoDB container, but I'm stuck with this process.
docker-compose.yml with app 1
version: '3'
services:
  backend:
    image: 'example-image_1:1.0.0'
    working_dir: /app/example-app-1/backend/example-app-1-api
    environment:
      - DB_URL=mongodb://0.0.0.0:27017/example_app_1
      - BACKEND_PORT=3333
      - BACKEND_IP=0.0.0.0
    restart: always
    network_mode: "host"
    ports:
      - '3333:3333'
    command: ['node', 'main.js']
    depends_on:
      - mongodb
    expose:
      - 3333
  mongodb:
    image: 'mongo:latest'
    environment:
      - 'MONGODB_DATABASE="example_app_1"'
    ports:
      - '27017:27017'
    expose:
      - 27017
docker-compose.yml with app-2
version: '3'
services:
  backend:
    image: 'example_app_2:1.0.0'
    working_dir: /app/example_app_2/backend/example-app-2-api
    environment:
      - DB_URL=mongodb://0.0.0.0:27017/example_app_2
      - BACKEND_PORT=8888
      - BACKEND_IP=0.0.0.0
    restart: always
    ports:
      - '8888:8888'
    command: ['node', 'main.js']
    expose:
      - 8888
  mongodb:
    image: 'mongo:latest'
    environment:
      - 'MONGODB_DATABASE="example_app_2"'
    ports:
      - '27017:27017'
    expose:
      - 27017
I need help in making these apps communicate with the common container - mongodb.
Since you're using 2 different docker-compose.yml files, you're actually running 2 backends and 2 mongodb instances on 2 separate docker networks.
One of the 2 mongo containers won't start because the port is already occupied.
Option 1 (nicer):
services:
  backend_1:
    ...
    ports:
      - '8888:8888'
  backend_2:
    ...
    ports:
      - '8899:8899'
  mongodb:
    ports:
      - '27017:27017'
This setup gives you 3 containers on the same network.
Now you can reach mongo from both backends at <mongo_ip>:27017.
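For example, each backend's DB_URL from the question could then point at the mongodb service name instead of 0.0.0.0 (a sketch, assuming the service names above and dropping network_mode: "host"):
  backend_1:
    environment:
      - DB_URL=mongodb://mongodb:27017/example_app_1
  backend_2:
    environment:
      - DB_URL=mongodb://mongodb:27017/example_app_2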
Option 2 (ugly):
services:
  backend:
    ...
    ports:
      - '8888:8888'
  mongodb:
    ports:
      - '27017:27017'
And in another docker-compose:
services:
  backend:
    ...
    ports:
      - '8888:8888'
This setup gives you 3 containers on 2 different networks.
Here each docker-compose.yml has its own docker network, so from the second backend service you have to connect to the other docker network to reach the mongodb container.
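One way to do that, sketched below, is to attach the second backend to the first project's default network by declaring it as external in the second docker-compose.yml (the name app1_default is an assumption; Compose names the default network <project>_default):
services:
  backend:
    ...
    networks:
      - app1_default
networks:
  app1_default:
    external: true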

Configuring firewall settings for Docker Swarm on Digital Ocean

How can I configure my Digital Ocean boxes to have the correct firewall settings?
I've followed the official guide for getting Digital Ocean and Docker containers working together.
I have 3 docker nodes that I can see when I run docker-machine ls. I have created a master docker node and joined the other docker nodes as workers. However, if I attempt to visit the URL of a node, the connection hangs. This setup works locally.
Here is my docker-compose that I am using for production.
version: "3"
services:
api:
image: "api"
command: rails server -b "0.0.0.0" -e production
depends_on:
- db
- redis
deploy:
replicas: 3
resources:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
env_file:
- .env-prod
networks:
- apinet
ports:
- "3000:3000"
client:
image: "client"
depends_on:
- api
deploy:
restart_policy:
condition: on-failure
env_file:
- .env-prod
networks:
- apinet
- clientnet
ports:
- "4200:4200"
- "35730:35730"
db:
deploy:
placement:
constaints: [node.role == manager]
restart_policy:
condition: on-failure
env_file: .env-prod
image: mysql
ports:
- "3306:3306"
volumes:
- ~/.docker-volumes/app/mysql/data:/var/lib/mysql/data
redis:
deploy:
placement:
constaints: [node.role == manager]
restart_policy:
condition: on-failure
image: redis:alpine
ports:
- "6379:6379"
volumes:
- ~/.docker-volumes/app/redis/data:/var/lib/redis/data
nginx:
image: app_nginx
deploy:
restart_policy:
condition: on-failure
env_file: .env-prod
depends_on:
- client
- api
networks:
- apinet
- clientnet
ports:
- "80:80"
networks:
apinet:
driver: overlay
clientnet:
driver: overlay
I'm pretty confident that the problem is with the firewall settings. I'm not sure, however, which ports need to be open. I've consulted this guide.

Accessing environment variables from a linked container

I would like to find out how to access the environment variables from a linked docker container. I would like to access the host/port in my node app from a linked rethinkdb container. Using docker compose (bluemixservice and rethinkdb):
version: '2'
services:
  twitterservice:
    build: ./workerTwitter
    links:
      - mongodb:mongolink
      - rabbitmq:rabbitlink
    ports:
      - "8082:8082"
    depends_on:
      - mongodb
      - rabbitmq
  bluemixservice:
    build: ./workerBluemix
    links:
      - rabbitmq:rabbitlink
      - rethinkdb:rethinkdb
    ports:
      - "8083:8083"
    depends_on:
      - rabbitmq
      - rethinkdb
  mongodb:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/var/lib/mongo
    command: mongod
  rabbitmq:
    image: rabbitmq:management
    ports:
      - '15672:15672'
      - '5672:5672'
  rethinkdb:
    image: rethinkdb:latest
    ports:
      - "8080:8080"
      - "28015:28015"
volumes:
  mongo-data:
    driver: local
  rethink-data:
    driver: local
I would like to access them in my pm2 processes.json:
{
  "apps": [
    {
      "name": "sentiment-service",
      "script": "./src",
      "merge_logs": true,
      "max_restarts": 40,
      "restart_delay": 10000,
      "instances": 1,
      "max_memory_restart": "200M",
      "env": {
        "PORT": 8080,
        "NODE_ENV": "production",
        "RABBIT_MQ": "amqp://rabbitlink:5672/",
        "ALCHEMY_KEY": "xxxxxxx",
        "RETHINK_DB_HOST": "Rethink DB Container Hostname?",
        "RETHINK_DB_PORT": "Rethink DB Container Port?",
        "RETHINK_DB_AUTHKEY": ""
      }
    }
  ]
}
This used to be possible (see here), but now the suggestion is to just use the linked service name as hostname, as you are already doing in your example with rabbitmq. Regarding port numbers, I don't think it adds much to use variables for that; I'd just go with the plain number. You can however parameterize the whole docker-compose.yml using variables in case you want to be able to quickly change a value from outside.
Note that there is no need to alias links; I find it much clearer to just use the service name.
Also, links already implies depends_on.
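So in the processes.json above, the RethinkDB entries can simply use the service name and port from the compose file (a sketch; the auth key stays empty as in the question):
        "RETHINK_DB_HOST": "rethinkdb",
        "RETHINK_DB_PORT": "28015",
        "RETHINK_DB_AUTHKEY": ""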
I solved it by using consul and registrator to detect all my containers:
version: '2'
services:
  consul:
    command: -server -bootstrap -advertise 192.168.99.101
    image: progrium/consul:latest
    ports:
      - 8300:8300
      - 8400:8400   # rpc/rest
      - 8500:8500   # ui
      - 8600:53/udp # dns
  registrator:
    command: -ip=192.168.99.101 consul://consul:8500
    image: gliderlabs/registrator:latest
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
    links:
      - consul
  twitterservice:
    build: ./workerTwitter
    container_name: twitterservice
    links:
      - mongodb:mongolink
      - rabbitmq:rabbitlink
      - consul
    ports:
      - "8082:8082"
    depends_on:
      - mongodb
      - rabbitmq
      - consul
  bluemixservice:
    build: ./workerBluemix
    container_name: bluemixservice
    links:
      - rabbitmq:rabbitlink
      - rethinkdb:rethinkdb
      - consul
    ports:
      - "8083:8083"
    depends_on:
      - rabbitmq
      - rethinkdb
      - consul
  mongodb:
    image: mongo:latest
    container_name: mongo
    ports:
      - "27017:27017"
    links:
      - consul
    volumes:
      - mongo-data:/var/lib/mongo
    command: mongod
  rabbitmq:
    image: rabbitmq:management
    container_name: rabbitmq
    ports:
      - '15672:15672'
      - '5672:5672'
    links:
      - consul
    depends_on:
      - consul
  rethinkdb:
    image: rethinkdb:latest
    container_name: rethinkdb
    ports:
      - "8080:8080"
      - "28015:28015"
    links:
      - consul
    depends_on:
      - consul
volumes:
  mongo-data:
    driver: local
  rethink-data:
    driver: local
