Changing the default GitLab port

I have installed the latest GitLab CE (8.10) on CentOS 7 (fresh install) via the Omnibus package, as described here: https://about.gitlab.com/downloads/#centos7
Now, I would like to change the default port at which one can access the Gitlab web interface. To this end, I followed the instructions at http://docs.gitlab.com/omnibus/settings/nginx.html#change-the-default-port-and-the-ssl-certificate-locations, namely I included
external_url "http://127.0.0.1:8765"
in the configuration file /etc/gitlab/gitlab.rb and then updated the configuration with gitlab-ctl reconfigure && gitlab-ctl restart.
However, when I then navigate to http://127.0.0.1:8765, Gitlab keeps redirecting to http://127.0.0.1/users/sign_in, i.e., the port specification is somehow discarded. When I then manually change the URL in the browser to http://127.0.0.1:8765/users/sign_in, it correctly displays the login page and interestingly, all links on the page (e.g., "Explore", "Help") contain the port specification.
To fix this behavior, is it necessary to specify the port somewhere other than in /etc/gitlab/gitlab.rb as well?

Issue here: https://gitlab.com/gitlab-org/gitlab-ce/issues/20131
Workaround:
Add this line to /etc/gitlab/gitlab.rb:
nginx['proxy_set_headers'] = { "X-Forwarded-Port" => "8080", "Host" => "<hostname>:8080" }
Replace the port and hostname with your values, then run as root or with sudo:
gitlab-ctl reconfigure
gitlab-ctl restart
This works for me on Debian 8.5, with gitlab-ce from the GitLab repository.
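Putting the pieces together, a minimal /etc/gitlab/gitlab.rb sketch of this workaround might look like the following (gitlab.example.com and 8765 are placeholder values; substitute your own hostname and port):

```ruby
# /etc/gitlab/gitlab.rb -- sketch only, hostname and port are examples
external_url "http://gitlab.example.com:8765"

# Forward the non-standard port to the Rails app so redirects keep it
nginx['proxy_set_headers'] = {
  "Host" => "gitlab.example.com:8765",
  "X-Forwarded-Port" => "8765"
}
```

Then run gitlab-ctl reconfigure && gitlab-ctl restart as before.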

In addition to external_url, the documentation also suggests setting a few NGINX proxy headers:
By default, when you specify external_url, omnibus-gitlab will set a few NGINX proxy headers that are assumed to be sane in most environments.
For example, omnibus-gitlab will set:
"X-Forwarded-Proto" => "https",
"X-Forwarded-Ssl" => "on"
(if you have specified the https scheme in external_url).
However, if your GitLab is in a more complex setup, such as behind a reverse proxy, you will need to tweak the proxy headers in order to avoid errors like "The change you wanted was rejected" or "Can't verify CSRF token authenticity / Completed 422 Unprocessable".
This can be achieved by overriding the default headers, e.g. by specifying in /etc/gitlab/gitlab.rb:
nginx['proxy_set_headers'] = {
  "X-Forwarded-Proto" => "http",
  "CUSTOM_HEADER" => "VALUE"
}
Save the file and reconfigure GitLab for the changes to take effect.
This way you can specify any header supported by NGINX you require.
The OP ewcz confirms in the comments:
I just uncommented the default settings for nginx['proxy_set_headers'] in /etc/gitlab/gitlab.rb (also, changing X-Forwarded-Proto to http and removing X-Forwarded-Ssl) and suddenly it works!

Related

Remove port from registry URL in the GitLab UI

We’re running GitLab with docker-compose and have the registry enabled. On the server where the Docker container runs, we have an NGINX instance that proxy_passes https://our-registry.com to the port on which the registry is exposed. That works fine.
Our problem is that in the Gitlab UI it’s showing the registry URL as https://our-registry.com:5005.
Using the URL with that port will not work. How can we make the UI not show the port? We have already tried setting the registry_external_url to https://our-registry.com but without success.
Thanks in advance!
Removing the config below from gitlab.rb worked for me.
gitlab_rails['registry_port'] = "5005"
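For reference, the relevant gitlab.rb fragment when an external proxy terminates TLS for the registry might look roughly like this (our-registry.com is the hostname from the question; treat this as a sketch to check against the omnibus documentation, not a definitive configuration):

```ruby
# /etc/gitlab/gitlab.rb -- sketch; adjust the hostname to your setup
registry_external_url 'https://our-registry.com'  # no port: this is the URL the UI shows

# Leave gitlab_rails['registry_port'] unset -- when present, its value is
# appended to the registry URL displayed in the UI.
```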

GitLab self-managed: static pages not loaded

I faced an issue when I tried to change the default port number for the GitLab self-managed version: the static files (CSS/JS) are not loaded after I changed the port number to 8080 (the default is 80). Could someone please help me with this issue? Much appreciated!
*Ubuntu server 20.04.3 LTS on Dell Precision Tower 3640.
Assuming an Omnibus installation, it depends on how you modified the default port.
It should be done by setting the NGINX listen port:
nginx['listen_port'] = 8080
Then run (possibly as root):
gitlab-ctl reconfigure
gitlab-ctl restart
Make sure, as described here, that your external_url remains on HTTP.
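A minimal gitlab.rb sketch of this answer (the hostname is a placeholder; substitute your own):

```ruby
# /etc/gitlab/gitlab.rb -- sketch only
external_url 'http://gitlab.example.com'  # keep the http:// scheme
nginx['listen_port'] = 8080               # serve the web UI on 8080 instead of 80
```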

Add non-docker service to traefik v2 - site resources missing

Question update below!
I have set up traefik in the last days, it seems to work great for docker containers. What does not work is setting up a non-docker backend. I have a netdata dashboard running (https://github.com/netdata/netdata) on port 19999 on the host.
I have defined a file provider:
[providers.file]
directory = "/home/myname/traefik"
filename = "providers.toml"
watch = true
Where I defined the service and router for my netdata dashboard:
[http.routers]
  [http.routers.netdata]
    service = "netdata"
    middlewares = ["replacepath"]
    rule = "Host(`my.host.name`) && Path(`/netdata`)"

[http.middlewares]
  [http.middlewares.replacepath.replacePath]
    path = "/"

[http.services]
  [http.services.netdata]
    [http.services.netdata.loadBalancer]
      [[http.services.netdata.loadBalancer.servers]]
        url = "http://192.168.178.60:19999/"  # my server's local IP
I use replacepath to strip the path so I don't end up one directory further down, which does not exist.
However, when I visit http://my.host.name/netdata, it serves only raw HTML by the looks of it; I get 404s for the .css and .js content.
What do I have to do to get all files in the website directory delivered? I feel like there is an easy solution to this which I can't see right now...
I found several tutorials using older traefik versions, where they use frontends and backends, to my understanding these are being replaced by routers, middlewares and services.
I tried using "http://localhost:19999" instead of my local IP, with no success (it results in a bad gateway error).
I also tried putting the traefik container on the "host" network, in case container isolation prevented traefik from reaching the netdata server; but since I already get at least part of the website, that can't be the issue, can it?
Update #1, 30 Jan 20:
After some more tries and a failed attempt to make it work with nginx, I realized that the proxy itself is not the problem here. I noticed that whatever service I run at the root level (so no path rule in traefik, or location / in nginx) works, but everything that gets a path/location is broken or not working at all. One service I wanted to proxy via a route is the dashboard of my homebridge (https://github.com/nfarina/homebridge), but it seems like Angular has trouble with custom paths. Same problem with my netdata dashboard and my onionbox status site. I am leaving this question open; maybe someone finds a (hacky) way of making it work.
You must use "PathPrefix" on the router and "replacePathRegex" on the middleware.
Try it this way; it works for me:
[http]
  [http.services]
    [http.services.netdata]
      [http.services.netdata.loadBalancer]
        [[http.services.netdata.loadBalancer.servers]]
          url = "http://172.24.0.1:19999"

  [http.middlewares]
    [http.middlewares.rem_subfolder]
      [http.middlewares.rem_subfolder.replacePathRegex]
        regex = "/netdata/(.*)"
        replacement = "/$1"

  [http.routers]
    [http.routers.netdata]
      rule = "PathPrefix(`/netdata/`)"
      entrypoints = ["web", "websecure"]
      middlewares = ["rem_subfolder"]
      service = "netdata"
Run the following command to get your host IP (the default route), and set it as the "url" of the service:
docker exec -it traefik ip route
Remember to change bind to = * to bind to = 172.24.0.1 in netdata.conf, to make netdata accessible only from traefik.
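To see what the rem_subfolder middleware does to a request path, here is a small standalone sketch (plain Ruby, purely illustrative; traefik applies the same regex substitution internally before forwarding to the backend):

```ruby
# Illustration only: replacePathRegex rewrites the request path by
# applying regex -> replacement, stripping the /netdata prefix.
regex = %r{/netdata/(.*)}
replacement = '/\1'

puts "/netdata/main.css".sub(regex, replacement)     # prints "/main.css"
puts "/netdata/api/v1/info".sub(regex, replacement)  # prints "/api/v1/info"
```

This is why the backend stops returning 404s for .css and .js assets: they are requested under /netdata/... but served by netdata at /.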

Set external GitLab web port

I have installed the latest version of GitLab. I forgot to set the port in the EXTERNAL_URL variable during installation, so I now need to change the port number. I have changed the external_url variable in gitlab.rb as described in this question: https://serverfault.com/questions/585528/set-gitlab-external-web-port-number.
GitLab is now accessible on the new port. However, the previous port remains accessible (locally only, it seems) and is not freed (netstat shows that the process listening on it is unicorn).
I rebooted the server, to no avail. I also tried changing the unicorn['port'] variable, but that doesn't work either.

Does vagrant share with https still work?

I have the guest's HTTPS port set to 443 on its Apache 2 installation.
In Vagrantfile
I have vm.forwarded_port set to forward from 443 to 8443.
I have vm.hostname set to actualdomain.org.
I've also installed the vagrant-hostsupdater plugin (vagrant plugin install vagrant-hostsupdater) so that actualdomain.org is written to my hosts file, and requests pull up the development environment rather than the live site.
I ran vagrant connect...
I ran vagrant share --https 443 --domain actualdomain.org but it reports the following:
==> default: Detecting network information for machine...
default: Local machine address: 192.168.xx.10
default: Local HTTPS port: 443
==> default: Checking authentication and authorization...
==> default: Creating Vagrant Share session...
There was an error returned by the Vagrant Cloud server. The
error message is shown below:
Domain cannot be used with this account
But if I run vagrant share without the --domain parameter, I end up with the following in my logs when I try to contact the site remotely:
Hostname XXXXX-YYY-ZZZZ provided via SNI and hostname XXXXX-YYY-ZZZZ.vagrantshare.com provided via HTTP are different
And in the browser I am returned an HTTP 400 Bad Request.
Is there any easy way around this? It seems to me that this didn't happen the last time I used Vagrant, and it seems as though something has been added to TLS validation since then that causes it to balk with the SNI error.
I even tried adding a server alias identical to XXXXX-YYY-ZZZ.vagrantshare.com, and it still gives me an issue; does that mean I have to rebuild the certificate every time the HashiCorp URL changes if I want to show the site to somebody via their browser?
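For reference, the Vagrantfile fragment described above would look roughly like this (domain and ports are taken from the question; treat it as a sketch, not the asker's exact file):

```ruby
# Vagrantfile -- sketch of the setup described in the question
Vagrant.configure("2") do |config|
  config.vm.hostname = "actualdomain.org"
  # Forward guest HTTPS (443) to host port 8443
  config.vm.network "forwarded_port", guest: 443, host: 8443
end
```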
