Restrict GitLab access by IP with Nginx

I have a public GitLab installation running behind Nginx, and I'd like to restrict access to it to a whitelist of IP addresses.
I've tried adding a basic restriction in Nginx like this:
location @gitlab {
    allow 127.0.0.1;
    allow XXX.XXX.XXX.XXX;
    deny all;
    ...
}
It kind of works, in that only the allowed IPs can reach GitLab's web interface.
But when I push from one of those allowed IPs, I get this error:
Pushing to http://my.server:port/myrepo.git
POST git-receive-pack (451 bytes)
remote: GitLab: API is not accesible
To http://my.server:port/myrepo.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'http://my.server:port/myrepo.git'
Weird. I also tried using ngx_http_geo_module, with the same result.
Does anyone know how to get this done?
Thanks

OK, after looking at gitlab_error.log, I figured out that I also had to whitelist my server's public IP:
[error] 13629#0: *380 access forbidden by rule, client: YYY.YYY.YYY.YYY, server: my.server, request: "POST //api/v3/internal/allowed HTTP/1.1", host: "my.server:port"
So in the end my Nginx config looks like this, and everything now works fine:
location @gitlab {
    allow 127.0.0.1;
    allow XXX.XXX.XXX.XXX;
    allow YYY.YYY.YYY.YYY;
    deny all;
    ...
}
Simple as that.
I don't know if it's a bug or expected behaviour, as I'm running an old GitLab 7.6; I'll update soon and check again.
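For completeness, the geo-module approach mentioned in the question works the same way once the server's own public IP is on the list, since the pre-receive hook calls back into the API through Nginx. A minimal sketch, reusing the placeholder addresses above and leaving the rest of the location block unchanged (the variable name is arbitrary):
# at http {} level
geo $gitlab_allowed {
    default          0;
    127.0.0.1        1;  # local requests
    XXX.XXX.XXX.XXX  1;  # whitelisted client
    YYY.YYY.YYY.YYY  1;  # the server's own public IP, used by the internal API call
}
# inside the server {} block
location @gitlab {
    if ($gitlab_allowed = 0) {
        return 403;
    }
    ...
}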

Related

Gitlab Nginx Omnibus additional virtualhost configuration is ignored

I'm trying to add a new virtual host to serve a separate website from my on-prem GitLab instance.
I have followed the steps in https://stackoverflow.com/a/39695791.
I'm trying to set up a reverse proxy, so I added the custom_nginx_config directive to /etc/gitlab/gitlab.rb. It looks like the configuration is loaded, because when I had a syntax error I could see it via gitlab-ctl tail.
I've fixed the syntax issue, but now when I reconfigure and restart Nginx via gitlab-ctl, it does not seem to pick up the new config.
The new configuration is as follows:
server {
    listen *:80;
    server_name domain.com;
    location / {
        proxy_pass http://localhost:4444;
    }
}
And then when I try to access domain.com, I get redirected to the GitLab app instead of my internal app.
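For context, this is roughly how the extra server block is usually wired in with Omnibus (a sketch; the include path is just a placeholder for wherever the custom server block actually lives):
# /etc/gitlab/gitlab.rb -- pull extra config into the bundled Nginx http {} block
nginx['custom_nginx_config'] = "include /etc/nginx/conf.d/*.conf;"
followed by sudo gitlab-ctl reconfigure and sudo gitlab-ctl restart nginx to apply it.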

Control GitLab traffic using HAProxy

We have a GitLab instance running in a private subnet in AWS. For some of our projects we need to be able to clone them and run pull commands from outside our network. We want to control access to these repositories through HAProxy and grant restricted access to them. We clone over HTTPS, so we do not need SSH traffic forwarding. The problem is that I have set up the rules to forward requests for a specific repository to GitLab, but every time I try to log in I get:
remote: HTTP Basic: Access denied
fatal: Authentication failed for ...
The rule is something simple like:
use_backend gitlab if { path_beg /path1/path2/repo.git }
Our backend definition looks like:
backend gitlab
    mode http
    server gitlab git.internal.server:80
Has anyone managed to do this using HAProxy?
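For reference, the whole path consolidated into one sketch (the frontend name, the bind line, and the Host header rewrite are assumptions, not part of the original config; forcing the Host header to match the name GitLab is configured with is just one thing worth ruling out):
frontend gitlab_in
    mode http
    bind *:443 ssl crt /path/to/cert.pem   # assumed TLS termination
    use_backend gitlab if { path_beg /path1/path2/repo.git }

backend gitlab
    mode http
    # assumption: make the Host header match the name GitLab expects
    http-request set-header Host git.internal.server
    server gitlab git.internal.server:80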

Caddy V2 IP whitelist

I am trying to implement an IP whitelist in my Caddy v2 configuration, something equivalent to this NGINX configuration:
allow 1.1.1.1;
allow 8.8.8.8;
deny all;
My current Caddy configuration is pretty straightforward:
my.website.com {
    reverse_proxy http://127.0.0.1:3000 {
    }
}
Thanks
You can try something like this in Caddy v2:
my.domain.com {
    @teammember {
        remote_ip forwarded 183.77.5.126 113.73.5.126
    }
    handle @teammember {
        reverse_proxy /* localhost:8081
    }
    respond "<h1>You are attempting to access protected resources!</h1>" 403
}
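Applied to the config from the question, that would look roughly like this (a sketch, untested; the matcher name is arbitrary):
my.website.com {
    @whitelist {
        remote_ip 1.1.1.1 8.8.8.8
    }
    handle @whitelist {
        reverse_proxy http://127.0.0.1:3000
    }
    respond "Access denied" 403
}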
I'm not saying qed's answer is wrong, but I couldn't get it to work in my case (possibly because I use imported templates inside a handle?).
My solution was as follows. Old config:
private.example.com {
    import my_template argument_1 /path/to/example/argument2
}
This changed to:
private.example.com {
    @blocked not remote_ip 1.2.3.4
    respond @blocked "<h1>Access Denied</h1>" 403
    import my_template argument_1 /path/to/example/argument2
}
Simply adding those two lines lets my site be accessed from that IP only; a test curl from a different IP returned the 403 error.
This was done on Caddy 2.4.6.
I am not sure it is possible directly in Caddy, but you can add a middleware/plugin to do this.
Here is the link where you can get it: https://github.com/pyed/ipfilter
According to the docs of this middleware, if you want to allow only the two IPs you listed, you should probably do something like this:
my.website.com {
    reverse_proxy http://127.0.0.1:3000
    ipfilter / {
        rule allow
        ip 1.1.1.1 8.8.8.8
        blockpage notauthorized.html
    }
}
I also think that if you want to block every request, not just /, you have to write ipfilter /* instead of ipfilter /.

DNS resolve timeout/delay for domains mapped to localhost in hosts file

I'm facing an issue that came up when using the proxy in Angular CLI.
But it's not directly related to Angular or Node.js... it seems to have its roots some levels deeper (namely at the operating system level).
Short version:
When I have a domain-to-IP mapping in my hosts file /etc/hosts and proxy it using node-http-proxy (the underlying layer of the Angular CLI proxy feature), there's a delay of 5000 ms before the request gets resolved and the response is provided.
Proxying is mandatory for backend communication to avoid cross-origin errors in development, because Angular apps are served via port 4200.
Longer version:
Operating System: OSX Catalina 10.15.4
Based on a deeper analysis, it's not caused by Angular CLI and not even by Node.js.
It seems something is going "wrong" at the system level, as I can reproduce the behavior in my terminal as well using the arp command.
There's a mapping in the /etc/hosts file that looks like this:
127.0.0.1 service.company.local
When I then run the command arp service.company.local, it won't resolve, of course, as this domain isn't known to any DNS server.
It finishes with the output: arp: service.company.local: Unknown host
Also, when the computer is disconnected from the internet/network (Wi-Fi off), arp still takes 5000 ms before it finishes with the Unknown host message, whereas for existing domains it returns Unknown host immediately, without delay.
The problem is pretty frustrating, as it heavily slows down local development of the Angular app: it makes some cascading requests, and these take so long that fluent work isn't possible.
Screenshot from Chrome Dev Tools:
Is there a known solution to get around this issue without moving away from the domain-to-IP mapping in the hosts file?
Addition (content of the hosts file)
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 service.company.local
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
I'm very thankful for any hints.
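One thing worth trying (an assumption about the cause, not something established above): the hosts entry only covers IPv4, so a resolver that also issues an IPv6 (AAAA) lookup for the name may end up waiting for that lookup to time out. Adding a matching loopback entry for IPv6 is a cheap experiment:
# /etc/hosts -- also map the name for IPv6 so AAAA lookups resolve locally
::1        service.company.local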

CUPS bad request

I have a little problem with CUPS 2.2.7.
This is my /etc/hosts file:
127.0.0.1 example.com
127.0.0.1 localhost
At http://localhost:631/ CUPS works fine,
but at http://example.com:631/ it doesn't work, on the same PC.
The error message shown in View Error Log is this one:
E [21/Feb/2019:11:54:18 +0100] [Client 33] Request from "localhost" using invalid Host: field "example.com:631".
The web page in Firefox shows an Invalid request error message and gives me an error (error code: 400), which seems to come from CUPS.
How can I solve this problem so that example.com:631 points to localhost and CUPS answers it successfully instead of returning Error 400: Access Denied?
By default CUPS serves HTTP requests only when the HTTP Host header equals "localhost". To allow it to service requests with additional HTTP Host headers, use the ServerAlias directive as described in the man cupsd.conf documentation. It's common to do the most unsafe thing and add
ServerAlias *
to /etc/cups/cupsd.conf to allow all possible HTTP Host headers to be serviced.
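A less permissive variant (assuming only the single host name from the question needs to be accepted) is to list it explicitly and restart CUPS afterwards:
# /etc/cups/cupsd.conf -- accept requests addressed to example.com as well
ServerAlias example.com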
I know this is old, but I too was experiencing the same issue recently and I resolved it by updating the following line in cupsd.conf from:
Listen 0.0.0.0:631
changed to:
Listen *:631
For those who care to know, I'm running CUPS within a Docker container, and this change fixes the "Bad Request" response.
