Add data persistence in Hyperledger Fabric - CouchDB

I have built a Hyperledger Fabric network. Below is the current configuration in my docker-compose.yaml file.
peer0.org1.example.com:
  container_name: peer0.org1.example.com
  image: hyperledger/fabric-peer:latest
  environment:
    - GODEBUG=netdns=go
    - CORE_PEER_ID=peer0.org1.example.com
    - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.example.com:7051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_example
    - FABRIC_LOGGING_SPEC=INFO
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_PROFILE_ENABLED=true
    - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984
    - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
    - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    - peer0.org1.example.com:/var/hyperledger/production
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: peer node start
  ports:
    - 7051:7051
    - 7053:7053
  depends_on:
    - couchdb0
  networks:
    - example
couchdb0:
  container_name: couchdb0
  image: hyperledger/fabric-couchdb
  environment:
    - COUCHDB_USER=
    - COUCHDB_PASSWORD=
  ports:
    - "5984:5984"
  networks:
    - example
I missed adding the configuration for data persistence.
I am following this documentation link to add data persistence.
As I understand it, after I add the lines below to the fabric-couchdb service, it will use the specified folder on the host machine's file system for storing data.
volumes:
  - /var/hyperledger/couchdb0:/opt/couchdb/data
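If I understand it correctly, the couchdb0 service would then end up looking roughly like this (the host path /var/hyperledger/couchdb0 is just the folder I picked):
couchdb0:
  container_name: couchdb0
  image: hyperledger/fabric-couchdb
  environment:
    - COUCHDB_USER=
    - COUCHDB_PASSWORD=
  ports:
    - "5984:5984"
  volumes:
    - /var/hyperledger/couchdb0:/opt/couchdb/data
  networks:
    - example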
But the thing I am not able to figure out is how to retrieve the data that is currently present in the network. Where does fabric-couchdb store data by default? Can we not copy the old data from the default location to the new folder location?

If you define the CouchDB volume as you say, your host folder should be in /var/hyperledger/couchdb0.
You can always run docker volume ls and docker volume inspect your_volume_name to check the mount point of your volumes.
If you have not defined volumes for your container and you want to retrieve the folder (I think that's your problem), then try:
docker cp couchdb0:/opt/couchdb/data your-host-destination-folder
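For example, a possible sequence for moving the existing data into the new host folder (a sketch; it assumes the couchdb0 container is still running and that /var/hyperledger/couchdb0 is the bind mount being added) would be:
# see which volumes exist and where they are mounted on the host
docker volume ls
docker volume inspect <your_volume_name>   # check the "Mountpoint" field

# copy the current CouchDB data out of the running container
docker cp couchdb0:/opt/couchdb/data ./couchdb0-backup

# stop the service, seed the new host folder, then start it again
# with the /var/hyperledger/couchdb0:/opt/couchdb/data mapping in place
docker-compose stop couchdb0
sudo mkdir -p /var/hyperledger/couchdb0
sudo cp -a ./couchdb0-backup/. /var/hyperledger/couchdb0/
docker-compose up -d couchdb0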

Related

Docker Cassandra Access in Client Running Under Same Docker

My docker-compose file is as below:
cassandra-db:
  container_name: cassandra-db
  image: cassandra:4.0-beta1
  ports:
    - "9042:9042"
  restart: on-failure
  volumes:
    - ./out/cassandra_data:/var/lib/cassandra
  environment:
    - CASSANDRA_CLUSTER_NAME='cassandra-cluster'
    - CASSANDRA_NUM_TOKENS=256
    - CASSANDRA_RPC_ADDRESS=0.0.0.0
  networks:
    - my-network
client-service:
  container_name: client-service
  image: client-service
  environment:
    - SPRING_PROFILES_ACTIVE=dev
  ports:
    - 8087:8087
  links:
    - cassandra-db
  networks:
    - my-network
networks:
  my-network:
I use the DataStax Java driver to connect to Cassandra from the client service, which also runs inside Docker.
CqlSession.builder()
    .addContactEndPoint(new DefaultEndPoint(
        InetSocketAddress.createUnresolved("cassandra-db", 9042)))
    .withKeyspace(CassandraConstant.KEY_SPACE_NAME.getValue())
    .build()
I use the DNS name to connect, but it does not connect; I also tried the Docker IP of the Cassandra container, and depends_on as well.
Is there any issue with the docker-compose file?
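For what it is worth, with the 4.x DataStax driver the builder also needs the local datacenter when contact points are given explicitly. A minimal sketch mirroring the snippet above (assuming the default datacenter name datacenter1 and that both containers share my-network) would be:
import java.net.InetSocketAddress;

import com.datastax.oss.driver.api.core.CqlSession;

// "cassandra-db" resolves through the shared my-network; the local datacenter
// name must match what the Cassandra node reports (datacenter1 by default).
CqlSession session = CqlSession.builder()
        .addContactPoint(InetSocketAddress.createUnresolved("cassandra-db", 9042))
        .withLocalDatacenter("datacenter1")
        .withKeyspace(CassandraConstant.KEY_SPACE_NAME.getValue())
        .build();
Also note that Cassandra can take a noticeable time to become ready, so even with depends_on the client may need to retry the connection at startup.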

Keycloak, Nodejs API and Traefik all in Docker -> only 403

Maybe someone can help me.
I have Keycloak, my Node.js server, and Traefik all installed with docker-compose. Everything seemed to be fine until I called a route from my frontend to the Node.js API: no matter what I tried, I got a 403 every time. When the Node.js server runs outside of Docker, it works, which is strange in my opinion.
Here is my docker-compose file, if it helps:
version: '3.8'
services:
  mariadb:
    image: mariadb:latest
    container_name: mariadb
    labels:
      - "traefik.enable=false"
    networks:
      - keycloak-network
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=
      - MYSQL_USER=
      - MYSQL_PASSWORD=
    command: mysqld --lower_case_table_names=1
    volumes:
      - ./:/docker-entrypoint-initdb.d
  keycloak:
    image: jboss/keycloak
    container_name: keycloak
    labels:
      - "traefik.http.routers.keycloak.rule=Host(`keycloak.localhost`)"
      - "traefik.http.routers.keycloak.tls=true"
    networks:
      - keycloak-network
    environment:
      - DB_DATABASE=
      - DB_USER=
      - DB_PASSWORD=
      - KEYCLOAK_USER=
      - KEYCLOAK_PASSWORD=
      - KEYCLOAK_IMPORT=/tmp/example-realm.json
      - PROXY_ADDRESS_FORWARDING=true
    ports:
      - 8443:8443
    volumes:
      - ./realm-export.json:/tmp/example-realm.json
    depends_on:
      - mariadb
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    labels:
      - "traefik.http.routers.phpmyadmin.rule=Host(`phpmyadmin.localhost`)"
    networks:
      - keycloak-network
    links:
      - mariadb:db
    ports:
      - 8081:80
    depends_on:
      - mariadb
  spectory-backend:
    image: spectory-backend
    container_name: spectory-backend
    labels:
      - "traefik.http.routers.spectory-backend.rule=Host(`api.localhost`)"
      - "traefik.port=4000"
    ports:
      - 4000:4000
    networks:
      - keycloak-network
    depends_on:
      - mariadb
      - keycloak
  spectory-frontend:
    image: spectory-frontend
    container_name: spectory-frontend
    labels:
      - "traefik.http.routers.spectory-frontend.rule=Host(`spectory.localhost`)"
    ports:
      - 4200:80
    depends_on:
      - mariadb
      - keycloak
      - spectory-backend
  traefik-reverse-proxy:
    image: traefik:v2.2
    command:
      - --api.insecure=true
      - --providers.docker
      - --entrypoints.web-secure.address=:443
      - --entrypoints.web.address=:80
      - --providers.file.directory=/configuration/
      - --providers.file.watch=true
    labels:
      - "traefik.http.routers.traefik-reverse-proxy.rule=Host(`traefik.localhost`)"
    ports:
      - "80:80"
      - "443:443"
      - "8082:8080"
    networks:
      - keycloak-network
    volumes:
      - ./traefik.toml:/configuration/traefik.toml
      - /var/run/docker.sock:/var/run/docker.sock
      - ./ssl/tls.key:/etc/https/tls.key
      - ./ssl/tls.crt:/etc/https/tls.crt
networks:
  keycloak-network:
    name: keycloak-network
I also tried static IP addresses for Node.js and Keycloak -> didn't work.
Here on StackOverflow someone mentioned that using HTTPS would help -> didn't work.
Pretty much my situation: Link . The goal for me is that the API is also reachable through Traefik.
Btw, my Angular frontend can communicate with Keycloak (also in a Docker container), and I can ping the Keycloak container from the Node.js container. The Node.js configuration parameters come directly from Keycloak.
I really don't know what to try next.
Has anyone tried something similar?
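One detail that stands out in the compose file above: traefik.port=4000 is Traefik v1 label syntax, while the proxy is traefik:v2.2, which ignores that label. In v2 the backend port is declared per service, roughly like this (a sketch that reuses the existing router rule; it may or may not be the cause of the 403, but it is worth fixing):
spectory-backend:
  labels:
    - "traefik.http.routers.spectory-backend.rule=Host(`api.localhost`)"
    - "traefik.http.services.spectory-backend.loadbalancer.server.port=4000"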

Node and Neo4J in docker-compose

I'm trying to run Neo4j in causal cluster mode. Everything is meant to run in Docker, with the configuration inside docker-compose.yml. All instances of the cluster are running; however, when I try to connect to Neo4j from Node.js (which of course is also run by the same docker-compose.yml) I get: Neo4j :: executeQuery :: Error Neo4jError: getaddrinfo ENOTFOUND neo4j. How can I make it work, i.e. connect from Node to Neo4j in causal cluster mode inside the Docker network? Here's my docker-compose.yml:
version: '3'
networks:
  lan:
services:
  app:
    build:
      dockerfile: Dockerfile.dev
      context: ./
    links:
      - core1
      - core2
      - core3
      - read1
    volumes:
      - /app/node_modules
      - ./:/app
    ports:
      - '3000:3000'
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
  core1:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7474:7474
      - 6477:6477
      - 7687:7687
    volumes:
      - $HOME/neo4j/neo4j-core1/conf:/conf
      - $HOME/neo4j/neo4j-core1/data:/data
      - $HOME/neo4j/neo4j-core1/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=CORE
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__formation=3
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__runtime=3
      - NEO4J_causal__clustering_initial__discovery__members=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7474
      - NEO4J_dbms_connector_https_listen__address=:6477
      - NEO4J_dbms_connector_bolt_listen__address=:7687
  core2:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7475:7475
      - 6478:6478
      - 7688:7688
    volumes:
      - $HOME/neo4j/neo4j-core2/conf:/conf
      - $HOME/neo4j/neo4j-core2/data:/data
      - $HOME/neo4j/neo4j-core2/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=CORE
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__formation=3
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__runtime=3
      - NEO4J_causal__clustering_initial__discovery__members=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7475
      - NEO4J_dbms_connector_https_listen__address=:6478
      - NEO4J_dbms_connector_bolt_listen__address=:7688
  core3:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7476:7476
      - 6479:6479
      - 7689:7689
    volumes:
      - $HOME/neo4j/neo4j-core3/conf:/conf
      - $HOME/neo4j/neo4j-core3/data:/data
      - $HOME/neo4j/neo4j-core3/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=CORE
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__formation=3
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__runtime=3
      - NEO4J_causal__clustering_initial__discovery__members=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7476
      - NEO4J_dbms_connector_https_listen__address=:6479
      - NEO4J_dbms_connector_bolt_listen__address=:7689
  read1:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7477:7477
      - 6480:6480
      - 7690:7690
    volumes:
      - $HOME/neo4j/neo4j-read1/conf:/conf
      - $HOME/neo4j/neo4j-read1/data:/data
      - $HOME/neo4j/neo4j-read1/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=READ_REPLICA
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causalClustering_initialDiscoveryMembers=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7477
      - NEO4J_dbms_connector_https_listen__address=:6480
      - NEO4J_dbms_connector_bolt_listen__address=:7690
And my Dockerfile.dev:
FROM node:alpine
WORKDIR '/app'
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
1) The application should be attached to the same network:
app:
  networks:
    - lan
2) This assumes a network named "lan" already exists; otherwise, create a new one:
networks:
  lan:
    driver: bridge
3) "links" is deprecated in docker-compose:
app:
  build:
    dockerfile: Dockerfile.dev
    context: ./
  links:
    - core1
    - core2
    - core3
    - read1
Instead of this, use depends_on if you need to maintain the startup sequence, as sketched below.
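A sketch of the app service with the three points applied (note also that no service in the compose file is actually called neo4j, so the bolt URL has to point at one of the real service names, e.g. core1 on its bolt port, which is presumably what produces getaddrinfo ENOTFOUND neo4j):
app:
  build:
    dockerfile: Dockerfile.dev
    context: ./
  networks:
    - lan
  depends_on:
    - core1
    - core2
    - core3
    - read1
  volumes:
    - /app/node_modules
    - ./:/app
  ports:
    - '3000:3000'
  environment:
    - REACT_APP_NEO4J_HOST=bolt://core1:7687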
=======================
Edit: Please read the comments for the follow-up questions.

How to fix the error when trying to bring up first-network

I am working with Hyperledger Fabric 1.3.0. I get the following error when I execute "byfn.sh -m up" in fabric-samples/first-network:
Starting for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds
Continue? [Y/n] Y
proceeding ...
LOCAL_VERSION=1.3.0
DOCKER_IMAGE_VERSION=1.3.0
Error: No such container: cli
ERROR !!!! Test failed
Please help
I don't have a docker-compose.yaml, but what I do have is docker-compose-cli.yaml. Its contents are below:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
volumes:
  orderer.example.com:
  peer0.org1.example.com:
  peer1.org1.example.com:
  peer0.org2.example.com:
  peer1.org2.example.com:
networks:
  byfn:
services:
  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    networks:
      - byfn
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com
    networks:
      - byfn
  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com
    networks:
      - byfn
  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com
    networks:
      - byfn
  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com
    networks:
      - byfn
  cli:
    container_name: cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      - peer0.org2.example.com
      - peer1.org2.example.com
    networks:
      - byfn
Well, it seems you are having some issues with your versions. What I recommend is to clean up all the Docker containers and images by running:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)      # for containers
docker rmi $(docker images -a -q) # for images
After that, re-download the fabric-samples (I think the 1.3.0 stable version was updated yesterday) and set up your cryptogen path again (this is very important: if you are pointing at an old version of the cryptogen tool, it won't work!).
Then give it a try again. If that doesn't work, I recommend you give us more information, such as:
Where are you running your First-Network? Windows? Mac? Linux?
Version of Linux? Version of docker?
If you are still having trouble, you can check out my guide on how to set up Hyperledger Fabric from scratch using the basic-network example; it's fairly easy and explains all the concepts you need:
Setup Hyperledger Fabric in multiple physical machines
Update
Since you are on Windows, don't use your Users folder; create a simple folder structure like C:/HLF, for example.
After that, add the line COMPOSE_CONVERT_WINDOWS_PATHS=1 to your .env file.
This helps Docker understand Windows paths, because they are different from Linux paths.
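For reference, the .env file sits next to byfn.sh in first-network; after adding the line it would look something like this (the existing entries are assumptions and may differ in your copy):
COMPOSE_PROJECT_NAME=net
IMAGE_TAG=latest
COMPOSE_CONVERT_WINDOWS_PATHS=1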
Update #2
Let's try another solution then: go to script.sh inside your scripts folder, look for the peer channel create... command, and add the line MSYS_NO_PATHCONV=1 just before the IF statement that surrounds it.
Also review the Windows Extras section in the Fabric documentation and check that you have everything installed:
Hyperledger Fabric - Windows Extras
After that, regenerate everything: run the Docker commands from my first answer and also add this:
docker network prune
After that:
./byfn.sh down
./byfn.sh generate
./byfn.sh -m up
Update #3
I tested the first-network using Windows 10 and Docker for Windows (using Linux containers) with the configurations I mentioned before, and it's working fine.
My docker version is: 18.06.1-ce
And I followed the Fabric Hyperledger official tutorial: Build Network
The only difference I saw is that I ran ./byfn.sh up instead of ./byfn.sh -m up.
I recommend you reinstall your Docker for Windows; maybe something is corrupted that doesn't allow you to start your network.
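One more generic Docker check that can narrow down the No such container: cli error is to see whether the cli container was created at all, and if so, why it exited:
docker ps -a --filter name=cli   # was the container created, and in what state is it?
docker logs cli                  # if it exists, the logs show why it stopped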
Hope that it helps!

ERROR: Named volume "xplore:/root/xPlore/rtdata:rw" is used in service "dsearch" but no declaration was found in the volumes section

I wanted to create containers for Tomcat, Documentum Content Server, and Documentum xPlore using a single compose file. I am facing issues due to the volumes mentioned in the docker-compose.yml file. I am able to bring up the services by executing the compose files separately; the problem appears when I try to merge the compose files together. I want to know how to run multiple containers with volumes using Docker Compose.
Below is the single compose file:
version: '2'
networks:
  default:
    external:
      name: dctmcs_default
services:
  dsearch:
    image: xplore_ubuntu:1.6.0070.0058
    container_name: dsearch
    hostname: dsearch
    ports:
      - "9300:9300"
    volumes:
      - xplore:/root/xPlore/rtdata
  indexagent:
    image: indexagent_ubuntu:1.6.0070.0058
    container_name: indexagent_1
    hostname: indexagent_1
    ports:
      - "9200:9200"
    environment:
      - primary_addr=dsearch
      - docbase_name=centdb
      - docbase_user=dmadmin
      - docbase_password=password
      - broker_host=contentserver
      - broker_port=1689
    depends_on:
      - dsearch
    volumes_from:
      - dsearch
volumes:
  xplore: {}
  tomcat_8:
    image: tomcat_8.0:ccms
    container_name: appserver
    hostname: appserver
    ports:
      - "9090:8080"
  contentserver:
    image: contentserver_ubuntu:7.3.0000.0214
    environment:
      - HIGH_VOLUME_SERVER_LICENSE=
      - TRUSTED_LICNESE=
      - STORAGEAWARE_LICENSE=
      - XMLSTORE_LICENSE=
      - SNAPLOCKSTORE_LICENSE=LDNAPJEWPXQ
      - RPS_LICENSE=
      - FED_RECD_SERVICE_LICENSE=
      - RECORD_MANAGER_LICENSE=
      - PRM_LICENSE=
      - ROOT_USER_PASSWORD=password
      - INSTALL_OWNER_PASSWORD=password
      - INSTALL_OWNER_USER=dmadmin
      - REPOSITORY_PASSWORD=password
      - EXTERNAL_IP=10.114.41.198
      - EXTERNALDB_IP=172.17.0.1
      - EXTERNALDB_ADMIN_USER=postgres
      - EXTERNALDB_ADMIN_PASSWORD=password
      - DB_SERVER_PORT=5432
      - DOCBASE_ID=45321
      - DOCBASE_NAME=centdb
      - USE_EXISTING_DATABASE_ACCOUNT=false
      - INDEXSPACE_NAME=dm_repo_docbase
      - BOF_REGISTRY_USER_PASSWORD=password
      - AEK_ALGORITHM=AES_256_CBC
      - AEK_PASSPHRASE=${AEK_PASSPHRASE}
      - AEK_NAME=aek.key
      - ENABLE_LOCKBOX=false
      - LOCKBOX_FILE_NAME=lockbox.lb
      - LOCKBOX_PASSPHRASE=${LOCKBOX_PASSPHRASE}
      - USE_EXISTING_AEK_LOCKBOX=false
      - CONFIGURE_THUMBNAIL_SERVER=NO
      - EXTDOCBROKERPORT=1689
      - CONTENTSERVER_PORT=50000
      - APP_SERVER_ADMIN_PASSWORD=jboss
      - INSTALL_OWNER_UID=
    hostname:
      "contentserver"
    container_name:
      "contentserver"
    ports:
      - "1689:1689"
      - "1690:1690"
      - "50000:50000"
      - "50001:50001"
      - "9080:9080"
      - "9082:9082"
      - "9081:9081"
      - "8081:8081"
      - "8443:8443"
      - "9084:9084"
    volumes:
      - centdb_odbc:/opt/dctm/odbc
      - centdb_data:/opt/dctm/data
      - centdb_dba:/opt/dctm/dba
      - centdb_share:/opt/dctm/share
      - centdb_dfc:/opt/dctm/config
      - centdb_xhive_storage:/opt/dctm/xhive_storage
      - centdb_XhiveConnector:/opt/dctm/wildfly9.0.1/server/DctmServer_MethodServer/deployments/XhiveConnector.ear
      - centdb_mdserver_conf:/opt/dctm/mdserver_conf
      - centdb_mdserver_log:/opt/dctm/wildfly9.0.1/server/DctmServer_MethodServer/log
      - centdb_mdserver_logs:/opt/dctm/wildfly9.0.1/server/DctmServer_MethodServer/logs
      - centdb_Thumbnail_Server_conf:/opt/dctm/product/7.3/thumbsrv/conf
      - centdb_Thumbnail_Server_webinf:/opt/dctm/product/7.3/thumbsrv/container/webapps/thumbsrv/WEB-INF
    privileged: true
volumes:
  centdb_data:
    driver: local
  centdb_dba:
  centdb_share:
    driver: local
  centdb_dfc:
  centdb_odbc:
  centdb_XhiveConnector:
  centdb_mdserver_conf:
  centdb_mdserver_log:
  centdb_mdserver_logs:
  centdb_Thumbnail_Server_conf:
  centdb_Thumbnail_Server_webinf:
  centdb_xhive_storage:
This error often appears when you are trying to create a volume as a subfolder of your current host folder. In that case, the syntax would have to be:
volumes:
  - ./centdb_odbc:/opt/dctm/odbc
In other words: The relative path "./" is missing!
When you map a directory, the source part must be either an absolute path, or a relative part that begins with ./ or ../. Otherwise, Docker interprets it as a Named Volume.
So instead of
volumes:
  - xplore:/root/xPlore/rtdata
You should write:
volumes:
  - ./xplore:/root/xPlore/rtdata
The volumes section should be the last top-level section in the docker-compose file: declare the volume names of all services together there, and then run docker-compose. It will create the containers.
volumes:
  xplore: {}
  centdb_data:
    driver: local
  centdb_dba:
  centdb_share:
    driver: local
  centdb_dfc:
  centdb_odbc:
  centdb_XhiveConnector:
  centdb_mdserver_conf:
  centdb_mdserver_log:
  centdb_mdserver_logs:
  centdb_Thumbnail_Server_conf:
  centdb_Thumbnail_Server_webinf:
  centdb_xhive_storage:
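Either way, a quick way to verify the merged file before starting anything is docker-compose config, which prints the fully resolved file and reports undeclared named volumes:
docker-compose config            # validate and print the resolved compose file
docker-compose config --volumes  # list the volume names the file declares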
