ERROR: Named volume "xplore:/root/xPlore/rtdata:rw" is used in service "dsearch" but no declaration was found in the volumes section

I wanted to create containers for Tomcat, Documentum Content Server and Documentum xPlore using a single compose file. I am facing issues due to the volumes mentioned in the docker-compose.yml file. I am able to bring up the services by executing the compose files separately; the problem occurs when I try to merge the compose files into one. I want to know how to run multiple containers with volumes using docker-compose.
Below is the single compose file:
version: '2'
networks:
  default:
    external:
      name: dctmcs_default
services:
  dsearch:
    image: xplore_ubuntu:1.6.0070.0058
    container_name: dsearch
    hostname: dsearch
    ports:
      - "9300:9300"
    volumes:
      - xplore:/root/xPlore/rtdata
  indexagent:
    image: indexagent_ubuntu:1.6.0070.0058
    container_name: indexagent_1
    hostname: indexagent_1
    ports:
      - "9200:9200"
    environment:
      - primary_addr=dsearch
      - docbase_name=centdb
      - docbase_user=dmadmin
      - docbase_password=password
      - broker_host=contentserver
      - broker_port=1689
    depends_on:
      - dsearch
    volumes_from:
      - dsearch
volumes:
  xplore: {}
  tomcat_8:
    image: tomcat_8.0:ccms
    container_name: appserver
    hostname: appserver
    ports:
      - "9090:8080"
  contentserver:
    image: contentserver_ubuntu:7.3.0000.0214
    environment:
      - HIGH_VOLUME_SERVER_LICENSE=
      - TRUSTED_LICNESE=
      - STORAGEAWARE_LICENSE=
      - XMLSTORE_LICENSE=
      - SNAPLOCKSTORE_LICENSE=LDNAPJEWPXQ
      - RPS_LICENSE=
      - FED_RECD_SERVICE_LICENSE=
      - RECORD_MANAGER_LICENSE=
      - PRM_LICENSE=
      - ROOT_USER_PASSWORD=password
      - INSTALL_OWNER_PASSWORD=password
      - INSTALL_OWNER_USER=dmadmin
      - REPOSITORY_PASSWORD=password
      - EXTERNAL_IP=10.114.41.198
      - EXTERNALDB_IP=172.17.0.1
      - EXTERNALDB_ADMIN_USER=postgres
      - EXTERNALDB_ADMIN_PASSWORD=password
      - DB_SERVER_PORT=5432
      - DOCBASE_ID=45321
      - DOCBASE_NAME=centdb
      - USE_EXISTING_DATABASE_ACCOUNT=false
      - INDEXSPACE_NAME=dm_repo_docbase
      - BOF_REGISTRY_USER_PASSWORD=password
      - AEK_ALGORITHM=AES_256_CBC
      - AEK_PASSPHRASE=${AEK_PASSPHRASE}
      - AEK_NAME=aek.key
      - ENABLE_LOCKBOX=false
      - LOCKBOX_FILE_NAME=lockbox.lb
      - LOCKBOX_PASSPHRASE=${LOCKBOX_PASSPHRASE}
      - USE_EXISTING_AEK_LOCKBOX=false
      - CONFIGURE_THUMBNAIL_SERVER=NO
      - EXTDOCBROKERPORT=1689
      - CONTENTSERVER_PORT=50000
      - APP_SERVER_ADMIN_PASSWORD=jboss
      - INSTALL_OWNER_UID=
    hostname: "contentserver"
    container_name: "contentserver"
    ports:
      - "1689:1689"
      - "1690:1690"
      - "50000:50000"
      - "50001:50001"
      - "9080:9080"
      - "9082:9082"
      - "9081:9081"
      - "8081:8081"
      - "8443:8443"
      - "9084:9084"
    volumes:
      - centdb_odbc:/opt/dctm/odbc
      - centdb_data:/opt/dctm/data
      - centdb_dba:/opt/dctm/dba
      - centdb_share:/opt/dctm/share
      - centdb_dfc:/opt/dctm/config
      - centdb_xhive_storage:/opt/dctm/xhive_storage
      - centdb_XhiveConnector:/opt/dctm/wildfly9.0.1/server/DctmServer_MethodServer/deployments/XhiveConnector.ear
      - centdb_mdserver_conf:/opt/dctm/mdserver_conf
      - centdb_mdserver_log:/opt/dctm/wildfly9.0.1/server/DctmServer_MethodServer/log
      - centdb_mdserver_logs:/opt/dctm/wildfly9.0.1/server/DctmServer_MethodServer/logs
      - centdb_Thumbnail_Server_conf:/opt/dctm/product/7.3/thumbsrv/conf
      - centdb_Thumbnail_Server_webinf:/opt/dctm/product/7.3/thumbsrv/container/webapps/thumbsrv/WEB-INF
    privileged: true
volumes:
  centdb_data:
    driver: local
  centdb_dba:
  centdb_share:
    driver: local
  centdb_dfc:
  centdb_odbc:
  centdb_XhiveConnector:
  centdb_mdserver_conf:
  centdb_mdserver_log:
  centdb_mdserver_logs:
  centdb_Thumbnail_Server_conf:
  centdb_Thumbnail_Server_webinf:
  centdb_xhive_storage:

This error often appears when you are trying to mount a subfolder of your current host folder as a volume. In that case the syntax would have to be:
volumes:
  - ./centdb_odbc:/opt/dctm/odbc
In other words: the relative path "./" is missing!

When you map a directory, the source part must be either an absolute path or a relative path that begins with ./ or ../. Otherwise, Docker interprets it as a named volume.
So instead of
volumes:
  - xplore:/root/xPlore/rtdata
you should write:
volumes:
  - ./xplore:/root/xPlore/rtdata
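To make the contrast concrete, here is a minimal sketch of the two forms side by side; the service names match the question, while ./xplore-data is just a hypothetical host directory:
services:
  dsearch:
    volumes:
      # named volume: managed by Docker, must be declared under the top-level "volumes:" key below
      - xplore:/root/xPlore/rtdata
  indexagent:
    volumes:
      # bind mount: a host path, absolute or starting with ./ or ../ (hypothetical directory)
      - ./xplore-data:/root/xPlore/rtdata
volumes:
  xplore: {}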

The top-level volumes section should be the last block in the compose file and should declare the named volumes of all services together; then run docker-compose and it will create the containers.
volumes:
  xplore: {}
  centdb_data:
    driver: local
  centdb_dba:
  centdb_share:
    driver: local
  centdb_dfc:
  centdb_odbc:
  centdb_XhiveConnector:
  centdb_mdserver_conf:
  centdb_mdserver_log:
  centdb_mdserver_logs:
  centdb_Thumbnail_Server_conf:
  centdb_Thumbnail_Server_webinf:
  centdb_xhive_storage:
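In other words, the merged file should contain exactly one top-level volumes key. A rough sketch of the intended layout, with the service bodies trimmed for brevity, might look like this:
version: '2'
services:
  dsearch:
    # ...
    volumes:
      - xplore:/root/xPlore/rtdata
  indexagent:
    # ...
  tomcat_8:
    # ...
  contentserver:
    # ...
    volumes:
      - centdb_data:/opt/dctm/data
      # ...
volumes:            # single declaration block for every named volume used above
  xplore: {}
  centdb_data:
    driver: local
  # ...the remaining centdb_* volumes...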

Related

Traefik: all subdirectories return 404

First, thank you in advance for taking a look. I think I have a very basic mistake somewhere, but I have searched for hours with no result. I am trying to run a proof of concept to expose a container behind a traefik 2.4 reverse proxy at a subdirectory. My DDNS does not allow for subdomains, so I am stuck with subdirectories until I can prove this works.
My problem is every container I stand up is dynamically picked up by traefik and shows up in the dashboard, but the subdirectory gives a 404 error. I have even used PathPrefix with a regex to prevent the ending / error.
Here is my configuration.
Traefik's docker-compose:
version: '3'
services:
  traefik:
    image: traefik:v2.4
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - t2_proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/traefik.yml:ro
      - ./data/acme.json:/acme.json
      - ./data/log:/var/log
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`domain.host.com`)"
      - "traefik.http.middlewares.traefik-auth.basicauth.users=user:password"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`domain.host.com`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=http"
      - "traefik.http.routers.traefik-secure.service=api@internal"
  fail2ban:
    image: crazymax/fail2ban:latest
    container_name: fail2ban
    network_mode: "host"
    cap_add:
      - NET_ADMIN
      - NET_RAW
    volumes:
      # - /var/log:/var/log:ro
      - ./fail2ban/data:/data
      - ./data/log:/var/log:ro
networks:
  t2_proxy:
    external: true
Here is my traefik.yml configuration file:
api:
  dashboard: true
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
certificatesResolvers:
  http:
    acme:
      email: email@email.com
      storage: acme.json
      httpChallenge:
        entrypoint: http
log:
  filePath: "/var/log/traefik.log"
  level: DEBUG
accessLog:
  filePath: "var/log/access.log"
  filters:
    statusCodes:
      - "400-499"
    retryAttempts: true
Here is the first proof-of-concept container I'm trying to expose. It's just portainer in a separate docker-compose:
version: '3'
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - t2_proxy
    ports:
      - "9000:9000"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data:/data
    labels:
      - "traefik.enable=true"
      # web routers
      - "traefik.http.routers.portainer.entrypoints=http"
      - "traefik.http.routers.portainer.rule=Host(`domain.host.com`) && PathPrefix(`/portainer`)"
      #- "traefik.http.routers.portainer.rule=Host(`domain.host.com`) && PathPrefix(`/portainer{regex:$$|/.*}`)"
      #- "traefik.http.routers.portainer.rule=Path(`/portainer`)"
      #- "traefik.http.routers.portainer.rule=PathPrefix(`/portainer{regex:$$|/.*}`)"
      # middlewares
      #- "traefik.http.routers.portainer.middlewares=portainer-stripprefix"
      #- "traefik.http.middlewares.portainer-stripprefix.stripprefix.prefixes=/portainer"
      - "traefik.http.middlewares.portainer-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.portainer.middlewares=portainer-https-redirect"
      # web secure routers
      - "traefik.http.routers.portainer-secure.entrypoints=https"
      - "traefik.http.routers.portainer-secure.rule=Host(`domain.host.com`) && PathPrefix(`/portainer`)"
      #- "traefik.http.routers.portainer-secure.rule=Host(`domain.host.com`) && PathPrefix(`/portainer{regex:$$|/.*}`)"
      #- "traefik.http.routers.portainer-secure.rule=Path(`/portainer`)"
      #- "traefik.http.routers.portainer-secure.rule=PathPrefix(`/portainer{regex:$$|/.*}`)"
      #- "traefik.http.routers.portainer-secure.middlewares=chain-basic-auth#users"
      - "traefik.http.routers.portainer-secure.tls=true"
      - "traefik.http.routers.portainer-secure.tls.certresolver=http"
      - "traefik.http.routers.portainer-secure.service=portainer"
      - "traefik.http.services.portainer.loadbalancer.server.port=9000"
      - "traefik.docker.network=t2_proxy"
networks:
  t2_proxy:
    external: true
In summary, I navigate to domain.host.com, and it behaves properly by redirecting me to domain.host.com/dashboard. However, when I go to domain.host.com/portainer it gives a 404 error.
Please let me know if I should post any other details. I sense I am missing a very obvious bit of configuration, as this is my first time using Traefik. Thanks again for any help!
For future googlers
Alright, I figured it out tonight. Thank you to reddit.com/traefik user /u/Quafeinum for trying to help! I actually read the guide at https://spad.uk/practical-configuration-of-traefik-as-a-reverse-proxy-for-docker/ by spad on linuxserver.io, which helped me understand the labels better. The crux of the problem was
traefik.http.services.whoami-whoami.loadbalancer.server.scheme=https
Whatever that does, it was in all the examples, and I mindlessly copied it (there's a cautionary tale here). After removing it, the containers are properly exposed on HTTPS now. Verified with portainer and whoami.
Here is a link to a pastebin of the relevant docker-composes and yamls. This will get a functioning traefik that dynamically loads docker container whoami over HTTPS.
https://pastebin.com/AfBdz6Qm
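For reference, a pared-down version of the label set that ended up working might look roughly like this, assuming the backend serves plain HTTP on port 9000 and keeping the router/service names from the question:
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.portainer-secure.entrypoints=https"
  - "traefik.http.routers.portainer-secure.rule=Host(`domain.host.com`) && PathPrefix(`/portainer`)"
  - "traefik.http.routers.portainer-secure.tls=true"
  - "traefik.http.routers.portainer-secure.tls.certresolver=http"
  - "traefik.http.routers.portainer-secure.service=portainer"
  # no loadbalancer.server.scheme=https override here: the backend speaks plain HTTP
  - "traefik.http.services.portainer.loadbalancer.server.port=9000"
  - "traefik.docker.network=t2_proxy"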

Keycloak, Nodejs API and Traefik all in Docker -> only 403

Maybe someone can help me.
I have Keycloak, my Node.js server, and Traefik all installed with docker-compose. Everything seemed fine until I called a route from my frontend to the Node.js API: no matter what I try, I get a 403 every time. When the Node.js server runs outside Docker, it works, which seems strange to me.
Here my Docker Compose if it helps:
version: '3.8'
services:
  mariadb:
    image: mariadb:latest
    container_name: mariadb
    labels:
      - "traefik.enable=false"
    networks:
      - keycloak-network
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=
      - MYSQL_USER=
      - MYSQL_PASSWORD=
    command: mysqld --lower_case_table_names=1
    volumes:
      - ./:/docker-entrypoint-initdb.d
  keycloak:
    image: jboss/keycloak
    container_name: keycloak
    labels:
      - "traefik.http.routers.keycloak.rule=Host(`keycloak.localhost`)"
      - "traefik.http.routers.keycloak.tls=true"
    networks:
      - keycloak-network
    environment:
      - DB_DATABASE=
      - DB_USER=
      - DB_PASSWORD=
      - KEYCLOAK_USER=
      - KEYCLOAK_PASSWORD=
      - KEYCLOAK_IMPORT=/tmp/example-realm.json
      - PROXY_ADDRESS_FORWARDING=true
    ports:
      - 8443:8443
    volumes:
      - ./realm-export.json:/tmp/example-realm.json
    depends_on:
      - mariadb
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    labels:
      - "traefik.http.routers.phpmyadmin.rule=Host(`phpmyadmin.localhost`)"
    networks:
      - keycloak-network
    links:
      - mariadb:db
    ports:
      - 8081:80
    depends_on:
      - mariadb
  spectory-backend:
    image: spectory-backend
    container_name: spectory-backend
    labels:
      - "traefik.http.routers.spectory-backend.rule=Host(`api.localhost`)"
      - "traefik.port=4000"
    ports:
      - 4000:4000
    networks:
      - keycloak-network
    depends_on:
      - mariadb
      - keycloak
  spectory-frontend:
    image: spectory-frontend
    container_name: spectory-frontend
    labels:
      - "traefik.http.routers.spectory-frontend.rule=Host(`spectory.localhost`)"
    ports:
      - 4200:80
    depends_on:
      - mariadb
      - keycloak
      - spectory-backend
  traefik-reverse-proxy:
    image: traefik:v2.2
    command:
      - --api.insecure=true
      - --providers.docker
      - --entrypoints.web-secure.address=:443
      - --entrypoints.web.address=:80
      - --providers.file.directory=/configuration/
      - --providers.file.watch=true
    labels:
      - "traefik.http.routers.traefik-reverse-proxy.rule=Host(`traefik.localhost`)"
    ports:
      - "80:80"
      - "443:443"
      - "8082:8080"
    networks:
      - keycloak-network
    volumes:
      - ./traefik.toml:/configuration/traefik.toml
      - /var/run/docker.sock:/var/run/docker.sock
      - ./ssl/tls.key:/etc/https/tls.key
      - ./ssl/tls.crt:/etc/https/tls.crt
networks:
  keycloak-network:
    name: keycloak-network
I also tried static IP addresses for Node.js and Keycloak, but that didn't work.
Someone here on Stack Overflow mentioned that using HTTPS would help; that didn't work either.
This is pretty much my situation: Link. The goal is for the API to be reachable through Traefik as well.
By the way, my Angular frontend, also running in Docker, can communicate with Keycloak, and I can ping the Keycloak container from the Node.js container; the Node.js configuration parameters come directly from Keycloak.
I really don't know what to do next.
Has anyone tried something similar?

Node and Neo4J in docker-compose

I'm trying to run Neo4j in causal cluster mode. Everything is meant to run in Docker, configured in a single docker-compose.yml. All instances of the cluster are running, but when I try to connect to Neo4j from Node.js (which of course is run by the same docker-compose.yml) I get: Neo4j :: executeQuery :: Error Neo4jError: getaddrinfo ENOTFOUND neo4j. How can I make it work, i.e. connect from Node to Neo4j in causal cluster mode inside the Docker network? Here's my docker-compose.yml:
version: '3'
networks:
  lan:
services:
  app:
    build:
      dockerfile: Dockerfile.dev
      context: ./
    links:
      - core1
      - core2
      - core3
      - read1
    volumes:
      - /app/node_modules
      - ./:/app
    ports:
      - '3000:3000'
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
  core1:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7474:7474
      - 6477:6477
      - 7687:7687
    volumes:
      - $HOME/neo4j/neo4j-core1/conf:/conf
      - $HOME/neo4j/neo4j-core1/data:/data
      - $HOME/neo4j/neo4j-core1/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=CORE
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__formation=3
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__runtime=3
      - NEO4J_causal__clustering_initial__discovery__members=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7474
      - NEO4J_dbms_connector_https_listen__address=:6477
      - NEO4J_dbms_connector_bolt_listen__address=:7687
  core2:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7475:7475
      - 6478:6478
      - 7688:7688
    volumes:
      - $HOME/neo4j/neo4j-core2/conf:/conf
      - $HOME/neo4j/neo4j-core2/data:/data
      - $HOME/neo4j/neo4j-core2/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=CORE
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__formation=3
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__runtime=3
      - NEO4J_causal__clustering_initial__discovery__members=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7475
      - NEO4J_dbms_connector_https_listen__address=:6478
      - NEO4J_dbms_connector_bolt_listen__address=:7688
  core3:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7476:7476
      - 6479:6479
      - 7689:7689
    volumes:
      - $HOME/neo4j/neo4j-core3/conf:/conf
      - $HOME/neo4j/neo4j-core3/data:/data
      - $HOME/neo4j/neo4j-core3/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=CORE
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__formation=3
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__runtime=3
      - NEO4J_causal__clustering_initial__discovery__members=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7476
      - NEO4J_dbms_connector_https_listen__address=:6479
      - NEO4J_dbms_connector_bolt_listen__address=:7689
  read1:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7477:7477
      - 6480:6480
      - 7690:7690
    volumes:
      - $HOME/neo4j/neo4j-read1/conf:/conf
      - $HOME/neo4j/neo4j-read1/data:/data
      - $HOME/neo4j/neo4j-read1/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=READ_REPLICA
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causalClustering_initialDiscoveryMembers=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7477
      - NEO4J_dbms_connector_https_listen__address=:6480
      - NEO4J_dbms_connector_bolt_listen__address=:7690
And my Dockerfile.dev:
FROM node:alpine
WORKDIR '/app'
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
1) The application should be attached to the same network:
app:
  networks:
    - lan
2) This assumes a network named "lan" already exists; otherwise create a new one:
networks:
  lan:
    driver: bridge
3) links is deprecated in docker-compose:
app:
  build:
    dockerfile: Dockerfile.dev
    context: ./
  links:
    - core1
    - core2
    - core3
    - read1
Instead of links, use depends_on if you need to maintain a startup order (see the sketch below).
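Putting the three points together, a minimal sketch of the corrected app service, keeping the names and settings from the question, might look like this:
services:
  app:
    build:
      dockerfile: Dockerfile.dev
      context: ./
    networks:
      - lan          # attach the app to the same network as the Neo4j cores
    depends_on:      # replaces the deprecated links; only controls start order
      - core1
      - core2
      - core3
      - read1
    volumes:
      - /app/node_modules
      - ./:/app
    ports:
      - '3000:3000'
networks:
  lan:
    driver: bridge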
Edit: please read the comments for the follow-up questions.

Accessing enviroment variables from a linked container

I would like to find out how to access the environment variables of a linked Docker container; specifically, I want my Node app to get the host/port of a linked rethinkdb container. I'm using docker-compose (see the bluemixservice and rethinkdb services):
version: '2'
services:
  twitterservice:
    build: ./workerTwitter
    links:
      - mongodb:mongolink
      - rabbitmq:rabbitlink
    ports:
      - "8082:8082"
    depends_on:
      - mongodb
      - rabbitmq
  bluemixservice:
    build: ./workerBluemix
    links:
      - rabbitmq:rabbitlink
      - rethinkdb:rethinkdb
    ports:
      - "8083:8083"
    depends_on:
      - rabbitmq
      - rethinkdb
  mongodb:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/var/lib/mongo
    command: mongod
  rabbitmq:
    image: rabbitmq:management
    ports:
      - '15672:15672'
      - '5672:5672'
  rethinkdb:
    image: rethinkdb:latest
    ports:
      - "8080:8080"
      - "28015:28015"
volumes:
  mongo-data:
    driver: local
  rethink-data:
    driver: local
I would like to access them in my pm2 processes.json:
{
  "apps": [
    {
      "name": "sentiment-service",
      "script": "./src",
      "merge_logs": true,
      "max_restarts": 40,
      "restart_delay": 10000,
      "instances": 1,
      "max_memory_restart": "200M",
      "env": {
        "PORT": 8080,
        "NODE_ENV": "production",
        "RABBIT_MQ": "amqp://rabbitlink:5672/",
        "ALCHEMY_KEY": "xxxxxxx",
        "RETHINK_DB_HOST": "Rethink DB Container Hostname?",
        "RETHINK_DB_PORT": "Rethink DB Container Port?",
        "RETHINK_DB_AUTHKEY": ""
      }
    }
  ]
}
This used to be possible (see here), but the current recommendation is to simply use the linked service name as the hostname, as you are already doing in your example with rabbitmq. Regarding port numbers, I don't think variables add much; I'd just use the plain number. You can, however, parameterize the whole docker-compose.yml with variables in case you want to be able to quickly change a value from outside.
Note that there is no need to alias links; I find it much clearer to just use the service name.
Also, links already implies depends_on.
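As a rough sketch of that advice applied to the question's setup (the values are illustrative; 28015 is RethinkDB's default client driver port), the host and port can simply be passed into the container's environment:
services:
  bluemixservice:
    build: ./workerBluemix
    depends_on:
      - rethinkdb
    environment:
      # "rethinkdb" resolves to the rethinkdb service on the compose network
      - RETHINK_DB_HOST=rethinkdb
      - RETHINK_DB_PORT=28015
  rethinkdb:
    image: rethinkdb:latest
The pm2 config can then read these values from process.env instead of hard-coding them.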
I solved it by using Consul and Registrator to detect all my containers.
version: '2'
services:
  consul:
    command: -server -bootstrap -advertise 192.168.99.101
    image: progrium/consul:latest
    ports:
      - 8300:8300
      - 8400:8400 # rpc/rest
      - 8500:8500 # ui
      - 8600:53/udp # dns
  registrator:
    command: -ip=192.168.99.101 consul://consul:8500
    image: gliderlabs/registrator:latest
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
    links:
      - consul
  twitterservice:
    build: ./workerTwitter
    container_name: twitterservice
    links:
      - mongodb:mongolink
      - rabbitmq:rabbitlink
      - consul
    ports:
      - "8082:8082"
    depends_on:
      - mongodb
      - rabbitmq
      - consul
  bluemixservice:
    build: ./workerBluemix
    container_name: bluemixservice
    links:
      - rabbitmq:rabbitlink
      - rethinkdb:rethinkdb
      - consul
    ports:
      - "8083:8083"
    depends_on:
      - rabbitmq
      - rethinkdb
      - consul
  mongodb:
    image: mongo:latest
    container_name: mongo
    ports:
      - "27017:27017"
    links:
      - consul
    volumes:
      - mongo-data:/var/lib/mongo
    command: mongod
  rabbitmq:
    image: rabbitmq:management
    container_name: rabbitmq
    ports:
      - '15672:15672'
      - '5672:5672'
    links:
      - consul
    depends_on:
      - consul
  rethinkdb:
    image: rethinkdb:latest
    container_name: rethinkdb
    ports:
      - "8080:8080"
      - "28015:28015"
    links:
      - consul
    depends_on:
      - consul
volumes:
  mongo-data:
    driver: local
  rethink-data:
    driver: local

Docker compose build error

If I run docker-compose build, I get an error that looks like this:
ERROR: Validation failed in file './docker-compose.yml', reason(s):
Service 'php' configuration key 'expose' '0' is invalid: should be of the format 'PORT[/PROTOCOL]'
I'm using the latest versions of Docker and docker-compose.
My docker-compose.yml contains the following:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
db:
  image: mysql
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: symfony
    MYSQL_USER: root
    MYSQL_PASSWORD: root
php:
  build: php-fpm
  expose:
    - 9000:9000
  volumes_from:
    - application
  links:
    - db
nginx:
  build: nginx
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - application
  volumes:
    - ./logs/nginx/:/var/log/nginx
elk:
  image: willdurand/elk
  ports:
    - 81:80
  volumes:
    - ./elk/logstash:/etc/logstash
    - ./elk/logstash/patterns:/opt/logstash/patterns
  volumes_from:
    - application
    - php
    - nginx
I'm using Ubuntu 14.04.
Could you tell me how to fix it?
You need to put the port definitions in quotes, especially for short (two-digit) ports. This is a consequence of how YAML and the parser used by docker-compose handle unquoted colon-separated values.
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
db:
  image: mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: symfony
    MYSQL_USER: root
    MYSQL_PASSWORD: root
php:
  build: php-fpm
  expose:
    - "9000"
  volumes_from:
    - application
  links:
    - db
nginx:
  build: nginx
  ports:
    - "80:80"
  links:
    - php
  volumes_from:
    - application
  volumes:
    - ./logs/nginx/:/var/log/nginx
elk:
  image: willdurand/elk
  ports:
    - "81:80"
  volumes:
    - ./elk/logstash:/etc/logstash
    - ./elk/logstash/patterns:/opt/logstash/patterns
  volumes_from:
    - application
    - php
    - nginx
The expose statement should also contain just a single port number (no host mapping), and it should be quoted as well.
All the needed changes are included above.
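For what it's worth, the practical difference between the two directives is roughly this: expose only makes a port available to other (linked) containers, while ports publishes it on the host. A minimal sketch:
php:
  build: php-fpm
  expose:
    - "9000"      # container-to-container only; a single quoted port, no mapping
nginx:
  build: nginx
  ports:
    - "80:80"     # "HOST:CONTAINER" mapping published on the host; quoted to avoid YAML parsing surprises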
