Gitlab 'Gateway Timeout' behind traefik proxy

So I'm trying to set up a gitlab-ce instance on Docker Swarm, using Traefik as a reverse proxy.
This is my proxy stack:
version: '3'
services:
  traefik:
    image: traefik:alpine
    command: --entryPoints="Name:http Address::80 Redirect.EntryPoint:https" --entryPoints="Name:https Address::443 TLS" --defaultentrypoints="http,https" --acme --acme.acmelogging="true" --acme.email="freelyformd@gmail.com" --acme.entrypoint="https" --acme.storage="acme.json" --acme.onhostrule="true" --docker --docker.swarmmode --docker.domain="mydomain.com" --docker.watch --web
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    networks:
      - traefik-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  traefik-net:
    external: true
And my gitlab stack:
version: '3'
services:
  omnibus:
    image: 'gitlab/gitlab-ce:latest'
    hostname: 'lab.mydomain.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://lab.mydomain.com'
        nginx['listen_port'] = 80
        nginx['listen_https'] = false
        registry_external_url 'https://registry.mydomain.com'
        registry_nginx['listen_port'] = 80
        registry_nginx['listen_https'] = false
        gitlab_rails['gitlab_shell_ssh_port'] = 2222
        gitlab_rails['gitlab_email_from'] = 'lab@mydomain.com'
        gitlab_rails['gitlab_email_reply_to'] = 'lab@mydomain.com'
    ports:
      - 2222:22
    volumes:
      - gitlab_config:/etc/gitlab
      - gitlab_logs:/var/log/gitlab
      - gitlab_data:/var/opt/gitlab
    networks:
      - traefik-net
    deploy:
      labels:
        traefik.enable: "port"
        traefik.frontend.rule: 'Host: lab.mydomain.com, Host: registry.mydomain.com'
        traefik.port: 80
      placement:
        constraints:
          - node.role == manager
  runner:
    image: 'gitlab/gitlab-runner:v1.11.4'
    volumes:
      - gitlab_runner_config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
volumes:
  gitlab_config:
  gitlab_logs:
  gitlab_data:
  gitlab_runner_config:
networks:
  traefik-net:
    external: true
traefik-net is an overlay network.
So when I deploy using docker stack deploy and visit lab.mydomain.com, I get the Gateway Timeout error. When I execute curl localhost within the gitlab container, it works fine. I'm not sure what the problem is; any pointers would be appreciated.

Turns out all I had to do was set the Traefik label traefik.docker.network to traefik-net; see https://github.com/containous/traefik/issues/1254
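For context: when a Swarm service is attached to more than one network, Traefik can pick an IP on a network it is not itself connected to and time out. The label pins the network Traefik should route over. A minimal sketch of the fixed deploy labels, using the service and network names from the stack above:

```yaml
deploy:
  labels:
    traefik.docker.network: traefik-net   # route over the shared overlay network
    traefik.frontend.rule: 'Host: lab.mydomain.com, Host: registry.mydomain.com'
    traefik.port: 80
```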

Related

could not translate host name to address (Data lineage- tokern)

version: '3.6'
services:
  tokern-demo-catalog:
    image: tokern/demo-catalog:latest
    container_name: tokern-demo-catalog
    restart: unless-stopped
    networks:
      - tokern-internal
    volumes:
      - tokern_demo_catalog_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: xxx
      POSTGRES_USER: xxx
      POSTGRES_DB: table1
  tokern-api:
    image: tokern/data-lineage:latest
    container_name: tokern-data-lineage
    restart: unless-stopped
    networks:
      - tokern-internal
    environment:
      CATALOG_PASSWORD: xxx
      CATALOG_USER: xxx
      CATALOG_DB: table1
      CATALOG_HOST: "xxxxxxxx.amazon.com"
      GUNICORN_CMD_ARGS: "--bind 0.0.0.0:4142"
  toker-viz:
    image: tokern/data-lineage-viz:latest
    container_name: tokern-data-lineage-visualizer
    restart: unless-stopped
    networks:
      - tokern-internal
      - tokern-net
    ports:
      - "39284:80"
networks:
  tokern-net: # Exposed by your host.
    # external: true
    name: "tokern-net"
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.10.0.0/24
  tokern-internal:
    name: "tokern-internal"
    driver: bridge
    internal: true
    ipam:
      driver: default
      config:
        - subnet: 10.11.0.0/24
volumes:
  tokern_demo_catalog_data:
I'm trying to implement data lineage for my database.
I have followed this documentation: https://pypi.org/project/data-lineage/ and https://tokern.io/docs/data-lineage/installation/
I am not able to solve this error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "xxx.amazonaws.com" to address: Temporary failure in name resolution
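One likely cause, judging from the compose file above: tokern-api is attached only to tokern-internal, which is declared with internal: true, so containers on that network cannot resolve or reach external hosts such as the Amazon endpoint in CATALOG_HOST. A sketch of the change, assuming the catalog host must be reached over the internet:

```yaml
tokern-api:
  networks:
    - tokern-internal
    - tokern-net   # non-internal bridge, so external DNS and egress work
```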

Docker-Compose.yml with GITLAB_OMNIBUS_CONFIG not working

Sorry if this is a duplicate question––I found similar issues but none seemed to be my exact use case... If I missed something mentioning a link would be highly appreciated.
I am trying to compose a docker stack with frontproxy, acme-companion and gitlab.
Currently, I am using a setup with several docker-compose.yml files for frontproxy and gitlab, in separate directories, which is working, without acme-companion.
My attempt to integrate it all into one file has failed so far; obviously I am messing up the GITLAB_OMNIBUS_CONFIG settings, and I just don't see where my error is.
version: '3.1'
services:
  frontproxy:
    restart: always
    image: jwilder/nginx-proxy
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "certs-volume:/etc/nginx/certs:ro"
      - "/etc/nginx/vhost.d"
      - "/usr/share/nginx/html"
  nginx-letsencrypt-companion:
    restart: always
    image: nginxproxy/acme-companion
    volumes:
      - "certs-volume:/etc/nginx/certs"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
  gitlab:
    image: gitlab/gitlab-ce:latest
    restart: always
    hostname: 'dev.redacted.com'
    environment:
      VIRTUAL_HOST: 'dev.redacted.com'
      LETSENCRYPT_HOST: 'dev.redacted.com'
      LETSENCRYPT_EMAIL: 'splash@redacted.com'
      VIRTUAL_PROTO: 'https'
      VIRTUAL_PORT: '443'
      CERT_NAME: 'redacted.com'
      GITLAB_OMNIBUS_CONFIG: |
        # Email setup
        gitlab_rails['gitlab_email_enabled'] = true
        gitlab_rails['gitlab_email_from'] = 'admin@redacted.com'
        gitlab_rails['gitlab_email_display_name'] = 'Gitlab@redacted.com'
        gitlab_rails['gitlab_email_reply_to'] = 'admin@redacted.com'
        gitlab_rails['smtp_enable'] = true
        gitlab_rails['smtp_address'] = 'mail.redacted.com'
        gitlab_rails['smtp_port'] = 587
        gitlab_rails['smtp_user_name'] = 'admin@redacted.com'
        gitlab_rails['smtp_password'] = 'redacted'
        gitlab_rails['smtp_domain'] = 'redacted.com'
        gitlab_rails['smtp_authentication'] = 'login'
        gitlab_rails['smtp_enable_starttls_auto'] = true
        gitlab_rails['gitlab_root_email'] = 'admin@redacted.com'
        # HTTPS Setup
        letsencrypt['enable'] = false
        external_url 'https://dev.redacted.com'
        gitlab_rails['gitlab_https'] = true
        gitlab_rails['gitlab_port'] = 443
    ports:
      - '22:22'
    volumes:
      - ./config:/etc/gitlab
      - ./logs:/var/log/gitlab
      - ./data:/var/opt/gitlab
volumes:
  certs-volume:
Edit:
I had not specified the error I was seeing. Thanks for pointing it out, @sytech!
So, here's the exact error message, when trying to start the stack with docker-compose up -d:
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "./docker-compose.yml", line 29, column 7
expected <block end>, but found '<scalar>'
in "./docker-compose.yml", line 38, column 9
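A parser error like this usually means a line is indented less than, or inconsistently with, its enclosing block. With GITLAB_OMNIBUS_CONFIG the classic trap is the | block scalar: every line of the embedded Ruby config must stay indented further than the GITLAB_OMNIBUS_CONFIG: key itself, or YAML ends the scalar early and then chokes on the next line. A sketch of the shape that parses, using a hypothetical dev.example.com:

```yaml
environment:
  VIRTUAL_HOST: 'dev.example.com'
  GITLAB_OMNIBUS_CONFIG: |
    external_url 'https://dev.example.com'       # indented past the key: part of the scalar
    gitlab_rails['gitlab_email_enabled'] = true  # same indentation on every line
ports:                                           # dedented: the scalar ends cleanly here
  - '22:22'
```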
Although I have not been able to figure out the specific problem with the version 3.1 docker-compose.yml, I managed to compose one that works for me now; perhaps it's useful to others as well:
version: '2.1'
services:
  frontproxy:
    restart: always
    image: jwilder/nginx-proxy
    labels:
      com.github.nginxproxy.acme-companion.frontproxy: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "certs-volume:/etc/nginx/certs:ro"
      - "/etc/nginx/vhost.d"
      - "/usr/share/nginx/html"
  nginx-letsencrypt-companion:
    restart: always
    image: nginxproxy/acme-companion
    volumes:
      - "certs-volume:/etc/nginx/certs"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    depends_on:
      - "frontproxy"
    volumes_from:
      - frontproxy
  gitlab:
    image: gitlab/gitlab-ce:latest
    restart: always
    hostname: 'dev.redacted.com'
    environment:
      VIRTUAL_HOST: 'dev.redacted.com'
      LETSENCRYPT_HOST: 'dev.redacted.com'
      LETSENCRYPT_EMAIL: 'admin@redacted.com'
      VIRTUAL_PROTO: 'https'
      VIRTUAL_PORT: '443'
      CERT_NAME: 'dev.redacted.com'
      GITLAB_SKIP_UNMIGRATED_DATA_CHECK: 'true'
      GITLAB_OMNIBUS_CONFIG: |
        # Email setup
        gitlab_rails['gitlab_email_enabled'] = true
        gitlab_rails['gitlab_email_from'] = 'admin@redacted.com'
        gitlab_rails['gitlab_email_display_name'] = 'Gitlab@Redacted'
        gitlab_rails['gitlab_email_reply_to'] = 'admin@redacted.com'
        gitlab_rails['smtp_enable'] = true
        gitlab_rails['smtp_address'] = 'mail.redacted.com'
        gitlab_rails['smtp_port'] = 587
        gitlab_rails['smtp_user_name'] = 'admin@redacted.com'
        gitlab_rails['smtp_password'] = 'myfancypassword'
        gitlab_rails['smtp_domain'] = 'redacted.com'
        gitlab_rails['smtp_authentication'] = 'login'
        gitlab_rails['smtp_enable_starttls_auto'] = true
        gitlab_rails['gitlab_root_email'] = 'admin@redacted.com'
        # HTTPS Setup
        letsencrypt['enable'] = false
        external_url 'https://dev.redacted.com'
        gitlab_rails['gitlab_https'] = true
        gitlab_rails['gitlab_port'] = 443
    ports:
      - '22:22'
    volumes:
      - ./config:/etc/gitlab
      - ./logs:/var/log/gitlab
      - ./data:/var/opt/gitlab
volumes:
  certs-volume:
I've been experiencing the same issue.
When I use the GITLAB_OMNIBUS_CONFIG environment variable, these settings do not appear to apply. If I copy just one of the settings that is easily identifiable into the gitlab.rb configuration, it applies just fine.
This is the environment variable as it is present in the container:
GITLAB_OMNIBUS_CONFIG="external_url 'https://dev.foo.com';nginx['redirect_http_to_https'] = true;gitlab_rails['gitlab_https'] = true;gitlab_rails['gitlab_email_enabled'] = true;gitlab_rails['gitlab_email_from'] = 'dev@foo.com';gitlab_rails['gitlab_email_display_name'] = 'DEV-GitLab';gitlab_rails['gitlab_email_reply_to'] = 'dev@foo.com';gitlab_rails['gitlab_email_subject_suffix'] = 'DEV-GIT';gitlab_rails['backup_keep_time'] = 172800;gitlab_rails['gitlab_shell_ssh_port'] = 9999;"
Yet, if I add the SSH port option to the gitlab.rb and reconfigure, I will see it in the clone address. So, while I am not using the composition method, I am launching the container with 'podman run' and passing options like those described in the docker guide for gitlab.

ECONNREFUSED at TCPConnectWrap.afterConnect NodeJS

I created a Node.js app which should use a URI to connect to RabbitMQ. Both are containerized with Docker and created by a docker-compose file. After running docker-compose up, the Node.js app returns this error:
Error: connect ECONNREFUSED X.X.X:X:5672
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1133:16) {
  errno: -111,
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '172.26.0.4',
  port: 5672
}
When starting the api-server locally (not as a container, but as a Node application), the connection to the containerized RabbitMQ server is established without any problems.
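Since the connection works once RabbitMQ is fully up, a common culprit is startup order: depends_on only waits for the container to start, not for the broker to accept connections, so the api-server's first connect can be refused. One hedged fix is to retry the connection at startup; a minimal sketch of the idea (generic Python, with a hypothetical connect callable standing in for the real AMQP client's connect):

```python
import time

def connect_with_retry(connect, attempts=10, delay=2.0):
    """Call connect() until it succeeds or attempts run out.

    depends_on only orders container startup; the broker may still be
    booting, so early connections are refused and must be retried.
    """
    last_err = None
    for _ in range(attempts):
        try:
            return connect()      # success: hand back the connection
        except ConnectionError as err:
            last_err = err        # broker not ready yet; wait and retry
            time.sleep(delay)
    raise last_err                # give up after the last attempt
```

The same pattern applies in Node (retry on ECONNREFUSED in the amqp client), or via a wait-for shell script run before the app starts.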
my rabbitmq.conf file looks like:
default_vhost = /
default_user = guest
default_pass = guest
default_user_tags.administrator = true
default_permissions.configure = .*
default_permissions.read = .*
default_permissions.write = .*
loopback_users = none
listeners.tcp.default = 5672
management.listener.port = 15672
management.listener.ssl = false
management.load_definitions = /etc/rabbitmq/definitions.json
URI for connecting:
{
  "mongoURI": "mongodb://mongo:27017",
  "amqpURI": "amqp://guest:guest@rabbitmq:5672"
}
As you can see, the hostname matches the service name in the docker-compose file.
Finally, the docker-compose file:
version: "3.8"
services:
  react-app:
    image: react-app
    stdin_open: true
    ports:
      - "3000:3000"
    networks:
      - mern-app
  api-server:
    image: api-server
    ports:
      - "5000:5000"
    networks:
      - mern-app
    depends_on:
      - mongo
      - rabbitmq
  process-schedular:
    image: process-schedular
    ports:
      - "5005:5005"
    networks:
      - mern-app
    depends_on:
      - mongo
      - rabbitmq
  mongo:
    image: mongo:3.6.19-xenial
    ports:
      - "27017:27017"
    networks:
      - mern-app
    volumes:
      - mongo-data:/data/db
  rabbitmq:
    image: rabbitmq:3-management
    hostname: rabbitmq
    volumes:
      - ./server/amqp/docker/enabled_plugins:/etc/rabbitmq/enabled_plugins
      - ./server/amqp/docker/definitions.json:/etc/rabbitmq/definitions.json
      - ./server/amqp/docker/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - mern-app
networks:
  mern-app:
    driver: bridge
volumes:
  mongo-data:
    driver: local

elasticsearch check indices exist returns false on first run

I have a docker-compose running 2 containers, each with its own service, node and elasticsearch.
app.js
...
const isElasticReady = await elastic.checkConnection();
if (isElasticReady) {
  const elasticIndex = await elastic.esclient.indices.exists({ index: elastic.index });
  if (!elasticIndex.body) {
    await elastic.createIndex(elastic.index);
    await elastic.setMapping();
    await data.populateDatabase();
  }
}
...
Whenever I run docker-compose up, esclient.indices.exists always returns false, even though the index already exists. As a result, I always get thrown a resource_already_exists_exception.
The strange thing is that I am using nodemon for development, and whenever I make changes while developing, esclient.indices.exists returns true. So the problem only happens when I run docker-compose up. I suspect something is happening asynchronously, but I am not sure what.
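A hedged guess at the mechanism: on first startup the exists check runs while Elasticsearch is still initializing, so the check reports false and the subsequent create collides with the index that is really there. Making the create idempotent sidesteps the race; a sketch of the pattern (generic Python, with hypothetical exists/create callables and an AlreadyExists stand-in for the client's resource_already_exists_exception):

```python
class AlreadyExists(Exception):
    """Stand-in for the ES client's resource_already_exists_exception."""

def ensure_index(exists, create):
    """Create the index only if missing; treat 'already exists' as success.

    check-then-create is inherently racy, so the create call itself must
    tolerate the index appearing between the two steps.
    """
    if exists():
        return False          # index already there; nothing to do
    try:
        create()
        return True           # we created it
    except AlreadyExists:
        return False          # someone beat us to it: also fine
```

The same idea in the Node client is catching the create error and ignoring it when its type is resource_already_exists_exception, rather than trusting the earlier exists check.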
docker-compose.yml (depends_on has been set):
version: '3.6'
services:
  api:
    image: nodeservice/node:10.15.3-alpine
    container_name: nodeservice
    build: .
    ports:
      - 3000:3000
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - NODE_PORT=3000
      - ELASTIC_URL=http://elasticsearch:9200
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run dev
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - esnet
  elasticsearch:
    container_name: my_elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    logging:
      driver: none
    ports:
      - 9300:9300
      - 9200:9200
    networks:
      - esnet
volumes:
  esdata:
networks:
  esnet:
Any hints?

Fix DNS on a docker-compose selenium grid so the selenium node connects to a docker-compose hostname

I have a selenium grid running under docker-compose on a Jenkins machine. My docker-compose includes a simple web server that serves up a single page application, and a test-runner container that orchestrates tests.
version: "3"
services:
  hub:
    image: selenium/hub
    networks:
      - selenium
    privileged: true
    restart: unless-stopped
    container_name: hub
    ports:
      - "4444:4444"
    environment:
      - SE_OPTS=-browserTimeout 10 -timeout 20
  chrome:
    image: selenium/node-chrome-debug
    networks:
      - selenium
    privileged: true
    restart: unless-stopped
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      - HUB_HOST=hub
      - HUB_PORT=4444
      - SE_OPTS=-browserTimeout 10 -timeout 20
    ports:
      - "5900:5900"
  firefox:
    image: selenium/node-firefox-debug
    networks:
      - selenium
    privileged: true
    restart: unless-stopped
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      - HUB_HOST=hub
      - HUB_PORT=4444
      - SE_OPTS=-browserTimeout 10 -timeout 20
    ports:
      - "5901:5900"
  runner:
    build:
      context: ./
      dockerfile: ./python.dockerfile
    security_opt:
      - seccomp=unconfined
    cap_add:
      - SYS_PTRACE
    command: sleep infinity
    networks:
      - selenium
    volumes:
      - ./:/app
    depends_on:
      - hub
      - app
      - chrome
      - firefox
    environment:
      HUB_CONNECTION_STRING: http://hub:4444/wd/hub
      TEST_DOMAIN: "app"
  app:
    image: nginx:alpine
    networks:
      - selenium
    volumes:
      - ../dist:/usr/share/nginx/html
    ports:
      - "8081:80"
networks:
  selenium:
When my tests run (in the runner container above), I can load the home page as long as I use an IP address:
def test_home_page_loads(self):
    host = socket.gethostbyname(self.test_domain)  # this is the TEST_DOMAIN env var above
    self.driver.get(f"http://{host}")
    header = WebDriverWait(self.driver, 40).until(
        EC.presence_of_element_located((By.ID, 'welcome-message')))
    assert(self.driver.title == "My Page Title")
    assert(header.text == "My Header")
But I can't use the hostname app. The following times out:
def test_home_page_with_hostname(self):
    self.driver.get("http://app/")
    email = WebDriverWait(self.driver, 10).until(
        EC.presence_of_element_located((By.ID, 'email')))
The problem I'm facing is that I can't do all this with IP addresses, because the web app connects to an external API and I need to configure that API for CORS requests.
I'd assumed the problem was that the chrome container couldn't reach the app container; the actual issue was that the web server in the app container wasn't serving pages for the hostname I was using. Updating the Nginx conf to include the correct server_name fixed the issue.
I can now add the hostname to the access-control-allow-origin settings on the APIs that the web page uses.
I'm attaching a basic working config here for anyone else looking to do something similar.
docker-compose.yml
version: "3"
services:
  hub:
    image: selenium/hub
    networks:
      - selenium
    privileged: true
    restart: unless-stopped
    container_name: hub
    ports:
      - "4444:4444"
    environment:
      - SE_OPTS=-browserTimeout 10 -timeout 20
  chrome:
    image: selenium/node-chrome-debug
    networks:
      - selenium
    privileged: true
    restart: unless-stopped
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      - HUB_HOST=hub
      - HUB_PORT=4444
      - SE_OPTS=-browserTimeout 10 -timeout 20
    ports:
      - "5900:5900"
  firefox:
    image: selenium/node-firefox-debug
    networks:
      - selenium
    privileged: true
    restart: unless-stopped
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      - HUB_HOST=hub
      - HUB_PORT=4444
      - SE_OPTS=-browserTimeout 10 -timeout 20
    ports:
      - "5901:5900"
  runner:
    build:
      context: ./
      dockerfile: ./python.dockerfile
    security_opt:
      - seccomp=unconfined
    cap_add:
      - SYS_PTRACE
    command: sleep infinity
    networks:
      - selenium
    volumes:
      - ./:/app
    depends_on:
      - hub
      - webserver
      - chrome
      - firefox
    environment:
      HUB_CONNECTION_STRING: http://hub:4444/wd/hub
      TEST_DOMAIN: "webserver"
  webserver:
    image: nginx:alpine
    networks:
      - selenium
    volumes:
      - ../dist:/usr/share/nginx/html
      - ./nginx_conf:/etc/nginx/conf.d
    ports:
      - "8081:80"
networks:
  selenium:
default.conf
server {
    listen 80;
    server_name webserver;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
The 'runner' container is based on the python:3 docker image and includes pytest. A simple working test looks like:
test.py
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import os
import pytest
import socket

# Fixture for Chrome
@pytest.fixture(scope="class")
def chrome_driver_init(request):
    hub_connection_string = os.getenv('HUB_CONNECTION_STRING')
    test_domain = os.getenv('TEST_DOMAIN')
    chrome_driver = webdriver.Remote(
        command_executor=hub_connection_string,
        desired_capabilities={
            'browserName': 'chrome',
            'version': '',
            "chrome.switches": ["disable-web-security"],
            'platform': 'ANY'})
    request.cls.driver = chrome_driver
    request.cls.test_domain = test_domain
    yield
    chrome_driver.close()

@pytest.mark.usefixtures("chrome_driver_init")
class Basic_Chrome_Test:
    driver = None
    test_domain = None

class Test_Atlas(Basic_Chrome_Test):
    def test_home_page_loads(self):
        self.driver.get(f"http://{self.test_domain}")
        header = WebDriverWait(self.driver, 40).until(
            EC.presence_of_element_located((By.ID, 'welcome-message')))
        assert(self.driver.title == "My Page Title")
        assert(header.text == "My Header")
This can be run with something like docker exec -it $(docker-compose ps -q runner) pytest test.py (exec into the runner container and run the tests using pytest).
This framework can then be added to a Jenkins step -
Jenkinsfile
stage('Run Functional Tests') {
    steps {
        echo 'Running Selenium Grid'
        dir("${env.WORKSPACE}/functional_testing") {
            sh "/usr/local/bin/docker-compose -f ${env.WORKSPACE}/functional_testing/docker-compose.yml -p ${currentBuild.displayName} run runner ./wait-for-webserver.sh pytest tests/atlas_test.py"
        }
    }
}
wait-for-webserver.sh
#!/bin/bash
# wait-for-webserver.sh
set -e
cmd="$@"

while ! curl -sSL "http://hub:4444/wd/hub/status" 2>&1 \
    | jq -r '.value.ready' 2>&1 | grep "true" >/dev/null; do
  echo 'Waiting for the Grid'
  sleep 1
done

while [[ "$(curl -s -o /dev/null -w '%{http_code}' http://webserver)" != "200" ]]; do
  echo 'Waiting for Webserver'
  sleep 1
done

>&2 echo "Grid & Webserver are ready - executing tests"
exec $cmd
Hope this is useful for someone.
