[Question posted by a user on YugabyteDB Community Slack]
I'm trying to run YugabyteDB in Docker and want to create a database automatically, the way PostgreSQL does with the POSTGRES_DB environment variable. I couldn't get it to work with either a command override or the postgres environment variables.
It's not possible in YugabyteDB with that environment variable.
You can instead use an additional service that exists just for this job, something like:
create-db:
  image: yugabytedb/yugabyte:latest
  command: ysqlsh -v ON_ERROR_STOP=on -h yb-tserver-0 -e -c 'create database demo'
  deploy:
    restart_policy:
      condition: on-failure
  depends_on:
    - yb-tserver-0
because you want to run this only once, even when you have many yb-tservers. With restart_policy set to on-failure, the command is also retried until the tserver is ready to accept connections.
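If you need to run more than a single statement, a variation on the same idea is to mount a SQL file into the helper service and execute it with ysqlsh -f. A sketch, assuming a file named init.sql next to the compose file (the file name is hypothetical, not part of the original answer):
create-db:
  image: yugabytedb/yugabyte:latest
  volumes:
    # init.sql is a hypothetical file holding all your DDL statements
    - ./init.sql:/init.sql
  command: ysqlsh -v ON_ERROR_STOP=on -h yb-tserver-0 -e -f /init.sql
  deploy:
    restart_policy:
      condition: on-failure
  depends_on:
    - yb-tserver-0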
Related
This might be a bit jumbled, so I'm going to walk through my whole setup as well as describe my goal.
I'm developing an API using Node.js/Express. I've integrated Docker so that anyone who wants to run the application only needs to run the command docker compose up.
The command is set up via a Dockerfile + docker-compose.yml.
The docker-compose.yml contains the following:
version: '3.4'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: postgres
      POSTGRES_DB: mainDB
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  api:
    image: api
    depends_on:
      - db
    build:
      context: .
      dockerfile: ./Dockerfile
    environment:
      NODE_ENV: production
    ports:
      - 3000:3000
    volumes:
      - .:/usr/src/app
This sets everything up so that both the API and the database run together, but the issue arises when I try to connect to the database.
I have an SQL file that contains the following:
DROP TABLE IF EXISTS fruits;
CREATE TABLE fruits (
  fruitID INTEGER PRIMARY KEY,
  fruitName VARCHAR(255) NOT NULL,
  fruitColor VARCHAR(255)
);
I'm currently unable to connect to the database via pgAdmin; the error I'm getting is that the database name I'm referencing ("mainDB") doesn't exist. When I enter psql from my terminal, it doesn't show the existence of the database either.
My overall confusion is this: how can I set this up so that when docker compose up is executed, the user ends up with both the API and the database running, with the table already created in the database?
Is this feasible? In the tutorials I've seen, people keep these SQL files around and just copy and paste the commands into the psql command line. What would be the point of having the database defined in the docker-compose.yml file if I had to build it manually from the psql command line?
If you are only interested in setting up the database for the application when the container is created, you can use PostgreSQL's initialization scripts.
You can mount any number of scripts into the /docker-entrypoint-initdb.d folder, and PostgreSQL will run them when booting up on an empty data directory.
Based on your current setup and the best practices recommended in the PostgreSQL image documentation, you could do something like this:
First, write a script that creates the user, the database, and all its tables. Wrapping it like this (set -e plus ON_ERROR_STOP) avoids a common source of trouble: one of the init scripts failing partway through and the container restarting against a half-initialized data directory.
#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
  CREATE USER docker;
  CREATE DATABASE docker;
  GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
  CREATE TABLE fruits (
    fruitID INTEGER PRIMARY KEY,
    fruitName VARCHAR(255) NOT NULL,
    fruitColor VARCHAR(255)
  );
EOSQL
Then mount this script to the appropriate folder by updating your docker-compose.yml file:
version: '3.4'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: postgres
      POSTGRES_DB: mainDB
    volumes:
      - ./pgdata:/var/lib/postgresql/data
      # Suppose the script you wrote above (fruits.sh) is in the same
      # folder as the docker-compose file.
      - ./fruits.sh:/docker-entrypoint-initdb.d/001-fruits.sh
    ports:
      - '5432:5432'
  api:
    image: api
    depends_on:
      - db
    build:
      context: .
      dockerfile: ./Dockerfile
    environment:
      NODE_ENV: production
    ports:
      - 3000:3000
    volumes:
      - .:/usr/src/app
PostgreSQL runs the provided init scripts in alphabetical order, so if you need them to run in a certain sequence, prefix them with an order number (hence the 001- prefix above).
You should now see the schema available in the database.
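One caveat worth keeping in mind: the entrypoint only runs these scripts against an empty data directory, and your compose file already persists data in ./pgdata. If the database cluster was created before you added the init script, you will need to wipe that directory to re-trigger initialization (destructive, so only do this on a dev setup):
docker compose down
rm -rf ./pgdata   # removes all existing database data
docker compose up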
Check the PostgreSQL image documentation site for more information:
https://hub.docker.com/_/postgres
I have a problem when I run a mongo image with docker-compose.yml. I need to encrypt my data because it is very sensitive. My docker-compose.yml is:
version: '3'
services:
  mongo:
    image: "mongo"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db
I checked that the mongodb-keyfile exists in data/db, so no problem there, but when I build and start the image, the command is:
"docker-entrypoint.sh mongod --enableEncryption --encryptionKeyFile /data/db/mongodb-keyfile"
The status:
About a minute ago Exited (2) About a minute ago
I checked the logs and saw:
Error parsing command line: unrecognised option '--enableEncryption'
I understand the error, but I don't know how to solve it. I'm thinking of writing a Dockerfile based on an Ubuntu (or any Linux) image and installing Mongo with all the necessary configuration, unless there is a way to solve this directly.
Please help me, thanks.
According to the documentation, this encryption is available in MongoDB Enterprise only, so you need a paid subscription to use it.
For the Docker image of the enterprise version, the documentation here says that you can build it yourself:
Download the Docker build files for MongoDB Enterprise.
Set MONGODB_VERSION to your major version of choice.
export MONGODB_VERSION=4.0
curl -O --remote-name-all https://raw.githubusercontent.com/docker-library/mongo/master/$MONGODB_VERSION/{Dockerfile,docker-entrypoint.sh}
Build the Docker container.
Use the downloaded build files to create a Docker container image wrapped around MongoDB Enterprise. Set DOCKER_USERNAME to your Docker Hub username.
export DOCKER_USERNAME=username
chmod 755 ./docker-entrypoint.sh
docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com -t $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION .
Test your image.
The following commands run mongod locally in a Docker container and check the version.
docker run --name mymongo -itd $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION
docker exec -it mymongo /usr/bin/mongo --eval "db.version()"
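Once that image exists, you could point your original compose file at it instead of the stock mongo image. A sketch, assuming you built and tagged the image exactly as above (the username and version tag are placeholders):
version: '3'
services:
  mongo:
    # hypothetical tag produced by the build steps above
    image: "username/mongo-enterprise:4.0"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db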
I am facing a weird dilemma. I have created a Node application, and this application needs to connect to MongoDB (running in a Docker container). I created a docker-compose file as follows:
version: "3"
services:
mongo:
image: mongo
expose:
- 27017
volumes:
- ./data/db:/data/db
my-node:
image: "<MY_IMAGE_PATH>:latest"
deploy:
replicas: 1
restart_policy:
condition: on-failure
working_dir: /opt/app
ports:
- "2000:2000"
volumes:
- ./mod:/usr/app
networks:
- webnet
command: "node app.js"
networks:
webnet:
I am using the official mongo image. I have omitted my own image's details from the above configuration. I have tried many configurations, but I am unable to connect to MongoDB (yes, I have changed the MongoDB URI inside the Node.js application too). Whenever I deploy my docker-compose stack, my application throws a MongoNetworkError of TransientTransactionError on startup, and I have spent many hours unable to find the problem.
One more weird thing: when I run my docker-compose file, I receive the following logs:
Creating network server_default
Creating network server_webnet
Creating service server_mongo
Creating service server_feed-grabber
Could it be that both services are in different networks? If yes, then how do I fix that?
Other info:
The MongoDB URI I tried in the Node.js application is
mongodb://mongo:27017/MyDB
I am running my docker-compose with the command: docker stack deploy -c docker-compose.yml server
My Node.js image is based on Ubuntu 18.
Can anyone help me with this?
OK, so I tried a few things and finally figured it out after spending many, many hours. There were two things I was doing wrong:
First, the startup logs showed that Docker was creating two networks, server_default and server_webnet; that was the first mistake. Both containers must be in the same network to talk to each other.
Second, I needed the Mongo container to run first, since my Node.js application depends on it. I did exactly that in my docker-compose configuration by introducing the depends_on property and putting both services on the same network, as sketched below.
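A minimal corrected version of the compose file might look like this (a sketch based on the description above; only the relevant parts are shown, and the image path stays a placeholder):
version: "3"
services:
  mongo:
    image: mongo
    expose:
      - 27017
    volumes:
      - ./data/db:/data/db
    networks:
      - webnet   # put mongo on the same network as the app
  my-node:
    image: "<MY_IMAGE_PATH>:latest"
    depends_on:
      - mongo    # ask for mongo to be started first
    working_dir: /opt/app
    ports:
      - "2000:2000"
    volumes:
      - ./mod:/usr/app
    networks:
      - webnet
    command: "node app.js"
networks:
  webnet:
Note that depends_on is honored by plain docker compose up but ignored by docker stack deploy, so under swarm the shared network is the essential part and the application should retry its initial connection.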
For me it was:
1- Get your IP by running the command:
docker-machine ip
2- Don't go to localhost:port; go to your-ip:port instead, for example: http://192.168.99.100:8080
Just getting started with GitLab and I can't seem to get my configuration working. I'm using the following:
image: maven:3.3-jdk-8-alpine
stages:
  - prepare
  - build
services:
  - postgres:latest
variables:
  POSTGRES_DB: my_database
  POSTGRES_USER: runner
  POSTGRES_PASSWORD: runner
prepare_db:
  stage: prepare
  image: postgres
  script:
    - export PGPASSWORD=$POSTGRES_PASSWORD
    - psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "CREATE EXTENSION \"uuid-ossp\";"
build:
  stage: build
  script: mvn clean test
It works fine if I simply want to compile my code (the build would then simply be mvn clean compile), but to run the tests I need a PostgreSQL instance. My code relies on UUIDs, so I need to ensure that the uuid-ossp extension is installed.
In my prepare_db job I can connect to the Postgres instance and execute the command. I have also verified that the extension is installed properly by issuing a second script command, SELECT uuid_generate_v4();, which returns a UUID.
But when the runner gets to the build job, it keeps telling me that the uuid_generate_v4() function is not present. Is my prepare_db job being run against a different Postgres instance?
It is happening because the prepare_db job is completely isolated from the build job: each job gets its own fresh instance of the postgres service, so prepare_db doesn't actually prepare anything for build. The fix is to run the preparation inside the build job itself, extracted into a YAML anchor to keep it reusable:
.prepare_db: &prepare_db |
export PGPASSWORD=$POSTGRES_PASSWORD
psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "CREATE EXTENSION \"uuid-ossp\";"
build:
stage: build
script:
- *prepare_db
- mvn clean test
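Putting it together, the whole pipeline might look like this (a sketch assembled from the original config; the prepare stage and prepare_db job are no longer needed). One assumption to check: psql must be available in the job's image, and maven:3.3-jdk-8-alpine does not ship it, so you may need to install a postgresql client first or run the job in an image that has one:
image: maven:3.3-jdk-8-alpine
services:
  - postgres:latest
variables:
  POSTGRES_DB: my_database
  POSTGRES_USER: runner
  POSTGRES_PASSWORD: runner
.prepare_db: &prepare_db |
  export PGPASSWORD=$POSTGRES_PASSWORD
  psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "CREATE EXTENSION \"uuid-ossp\";"
build:
  stage: build
  script:
    - *prepare_db
    - mvn clean test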
I would like to create keyspaces and column-families at the start of my Cassandra container.
I tried the following in a docker-compose.yml file:
# shortened for clarity
cassandra:
  hostname: my-cassandra
  image: my/cassandra:latest
  command: "cqlsh -f init-database.cql"
The image my/cassandra:latest contains init-database.cql in /, but this does not seem to work.
Is there a way to make this happen?
I was also searching for a solution to this, and here is how I accomplished it.
The idea is that a second Cassandra container gets a volume with the schema.cql and runs the cqlsh command against the first.
My version uses a healthcheck so we can get rid of the sleep command:
version: '2.2'
services:
  cassandra:
    image: cassandra:3.11.2
    container_name: cassandra
    ports:
      - "9042:9042"
    environment:
      - "MAX_HEAP_SIZE=256M"
      - "HEAP_NEWSIZE=128M"
    restart: always
    volumes:
      - ./out/cassandra_data:/var/lib/cassandra
    healthcheck:
      test: ["CMD", "cqlsh", "-u cassandra", "-p cassandra", "-e describe keyspaces"]
      interval: 15s
      timeout: 10s
      retries: 10
  cassandra-load-keyspace:
    container_name: cassandra-load-keyspace
    image: cassandra:3.11.2
    depends_on:
      cassandra:
        condition: service_healthy
    volumes:
      - ./src/main/resources/cassandra_schema.cql:/schema.cql
    command: /bin/bash -c "echo loading cassandra keyspace && cqlsh cassandra -f /schema.cql"
Netflix version, using sleep:
version: '3.5'
services:
  cassandra:
    image: cassandra:latest
    container_name: cassandra
    ports:
      - "9042:9042"
    environment:
      - "MAX_HEAP_SIZE=256M"
      - "HEAP_NEWSIZE=128M"
    restart: always
    volumes:
      - ./out/cassandra_data:/var/lib/cassandra
  cassandra-load-keyspace:
    container_name: cassandra-load-keyspace
    image: cassandra:latest
    depends_on:
      - cassandra
    volumes:
      - ./src/main/resources/cassandra_schema.cql:/schema.cql
    command: /bin/bash -c "sleep 60 && echo loading cassandra keyspace && cqlsh cassandra -f /schema.cql"
P.S. I found this approach in one of the Netflix repos.
We recently tried to solve a similar problem in KillrVideo, a reference application for Cassandra. We use Docker Compose to spin up the environment needed by the application, which includes a DataStax Enterprise (i.e. Cassandra) node. We wanted that node to do some bootstrapping the first time it started, to install the CQL schema (using cqlsh to run the statements in a .cql file, just like you're trying to do). Basically, the approach we took was to write a shell script for our Docker entrypoint that:
Starts the node normally but in the background.
Waits until port 9042 is available (this is where clients connect to run CQL statements).
Uses cqlsh -f to run some CQL statements and init the schema.
Stops the node that's running in the background.
Continues on to the usual entrypoint for our Docker image that starts up the node normally (in the foreground like Docker expects).
We just use the existence of a file to indicate whether the node has already been bootstrapped and check that on startup to determine whether we need to do that logic above or can just start it normally. You can see the results in the killrvideo-dse-docker repository on GitHub.
There is one caveat to this approach. This worked great for us because in our reference application, we're only spinning up a single node (i.e. we aren't creating a cluster with more than one node). If you're running multiple nodes, you'll probably want to make sure that only one of the nodes does the bootstrapping to create the schema because multiple clients modifying the schema simultaneously can cause some issues with your cluster. (This is a known issue and will hopefully be fixed at some point.)
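For illustration, a condensed sketch of that entrypoint logic could look like the following. This is a simplification, not the actual KillrVideo script; the marker file path and the schema file location are assumptions:
#!/bin/bash
set -e

# Hypothetical marker file recording that bootstrap already ran
BOOTSTRAP_MARKER=/var/lib/cassandra/.bootstrapped

if [ ! -f "$BOOTSTRAP_MARKER" ]; then
  # 1. Start the node normally, but in the background
  # (docker-entrypoint.sh is the base image's original entrypoint)
  docker-entrypoint.sh cassandra -f &
  pid=$!

  # 2. Wait until port 9042 accepts CQL connections
  until cqlsh -e 'describe cluster' > /dev/null 2>&1; do sleep 5; done

  # 3. Run the schema statements (hypothetical file location)
  cqlsh -f /schema.cql

  # 4. Stop the background node and record that we bootstrapped
  kill "$pid"
  wait "$pid" || true
  touch "$BOOTSTRAP_MARKER"
fi

# 5. Hand off to the usual startup in the foreground, as Docker expects
exec docker-entrypoint.sh cassandra -f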
I solved this problem by patching Cassandra's docker-entrypoint.sh so that it executes .sh and .cql files located in /docker-entrypoint-initdb.d on startup. This is similar to how MySQL docker containers work.
Basically, I add a small script at the end of docker-entrypoint.sh (right before the last line, exec "$@") that runs the cql scripts once Cassandra is up. A simplified version is:
INIT_DIR=docker-entrypoint-initdb.d

# this whole block will execute in the background
(
  cd $INIT_DIR
  # wait for cassandra to be ready
  while ! cqlsh -e 'describe cluster' > /dev/null 2>&1; do sleep 6; done
  echo "$0: Cassandra cluster ready: executing cql scripts found in $INIT_DIR"
  # find and execute cql scripts, in name order
  for f in $(find . -type f -name "*.cql" -print | sort); do
    echo "$0: running $f"
    cqlsh -f "$f"
    echo "$0: $f executed"
  done
) &
This solution works for all Cassandra versions (at least up to 3.11, as of the time of writing).
Hence, you only have to build and use this patched Cassandra image, and then add the proper initialization scripts to the container using docker-compose volumes.
A complete gist with a more robust entrypoint patch (and example) is available here.
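For example, after building the patched image (say, docker build -t my/cassandra-init .), mounting the scripts could look like this compose fragment (the image tag and host folder name are assumptions for illustration):
cassandra:
  image: my/cassandra-init:latest   # hypothetical tag for the patched image
  volumes:
    # any *.cql files in this host folder are executed on startup
    - ./cql:/docker-entrypoint-initdb.d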