Gerrit Change Canonical URL - .htaccess

I have set up Gerrit on my subdomain at gerrit.mydomain.com. By default Gerrit runs on port 8080, so I changed the port in the [httpd] section of gerrit.config to 80, and now gerrit.mydomain.com opens the Gerrit home page.
Now when I print the canonical URL by running the following command:
git config -f ~/gerrit_folder/etc/gerrit.config gerrit.canonicalWebUrl
It still shows the URL as follows:
http://localhost:8080/
And that's the problem: now when I sign in via OpenID it returns to my domain as gerrit.mydomain.com:8080, and nothing happens because there is no server there.
Can you please tell me how to fix this so that it redirects to gerrit.mydomain.com and the canonical URL is changed to http://localhost:80?

The gerrit.canonicalWebUrl setting is not tied to the httpd port configuration. This makes sense if you use a proxy server (such as nginx or Apache) where you forward port 80 or 443 (web server) to port 8080 (Gerrit).
You have to edit your gerrit.config and adjust the canonicalWebUrl line to the hostname it should be.
You should be able to run git config -f ~/gerrit_folder/etc/gerrit.config gerrit.canonicalWebUrl "http://gerrit.mydomain.com/" (without --add, so an existing value is replaced rather than duplicated).
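If that works, the relevant section of your gerrit.config should end up looking something like this (path and hostname taken from your question):
[gerrit]
        canonicalWebUrl = http://gerrit.mydomain.com/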
I also highly recommend putting a reverse proxy with SSL in front of it.
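A minimal nginx sketch of such a proxy, assuming Gerrit is left listening on 127.0.0.1:8080 (for SSL you would add a listen 443 ssl server block with your certificate paths):
server {
    listen 80;
    server_name gerrit.mydomain.com;

    location / {
        # forward everything to the Gerrit HTTP daemon (assumed to still be on 8080)
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}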

Related

Hostname configuration Keycloak 19.0.1

I'm running Keycloak version 19.0.1 with no proxy and I want to set a hostname (mykeycloak) as the frontend URL for my realm.
I run Keycloak with the command below:
.\kc.bat start-dev --hostname mykeycloak --proxy edge 
The configuration endpoint (http://localhost:8080/realms/master/.well-known/uma2-configuration) shows:
{"issuer":"http://mykeycloak/realms/master","authorization_endpoint":"http://mykeycloak/realms/master/protocol/openid-connect/auth",...
I also added the record below to my /etc/hosts:
127.0.0.1 mykeycloak
Finally, when I open the mentioned endpoint (http://mykeycloak/realms/master) I get "This site can't be reached - ERR_CONNECTION_REFUSED".
Am I missing something?
On top of that, this configuration won't work on a public network unless everyone adds the mykeycloak record to their DNS.
So what would be the solution in that case?
It looks like you are simply missing the port, i.e. it should be http://mykeycloak:8080/realms/master.
If you want to have Keycloak accessible on the default ports, e.g. 80 or 443, you either need a proxy running on those ports and forwarding to Keycloak on 8080/8443, or you run Keycloak itself on those ports (but that's a bad idea for security reasons).
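For example, a minimal nginx sketch of that proxy, assuming Keycloak stays on 127.0.0.1:8080 (with --proxy edge, Keycloak expects the proxy to terminate TLS and pass the X-Forwarded-* headers):
server {
    listen 80;
    server_name mykeycloak;

    location / {
        # forward to the Keycloak instance (assumed to be on 8080)
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}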

How to configure port forwarding in gitlab?

I have configured GitLab so that I can only connect to it from a specific IP address. In the gitlab.rb file I configured the URL this way:
external_url 'gitlab.example.pl:2000'
and also configured ufw:
[ 1] 2000 ALLOW IN 192.169.0.1/24
When I want to access GitLab in the browser I have to add port 2000 to the URL, so I would like to forward that to port 443. I can't restrict access to port 443 to a specific IP address in ufw, because Mattermost is configured on the same server and must be accessible from everywhere. I tried port forwarding with Apache2 or nginx, but GitLab listens on port 80 and because of this Apache2 and nginx don't work. I also tried to find a solution in gitlab.rb:
nginx['listen_port'] = 443
nginx['redirect_http_to_https_port'] = 80
nginx['redirect_http_to_https'] = false
Please give me a solution to this problem.
You do not have to configure listeners for GitLab and Mattermost separately. Both your Mattermost and GitLab URLs will point to the same IP address and port, and both should route to NGINX.
NGINX will route traffic appropriately to GitLab or Mattermost based on the Host header. Just configure external_url for GitLab and mattermost_external_url for Mattermost appropriately within the same GitLab installation. There's no particular need to put Apache in front of GitLab's bundled NGINX.
external_url 'https://gitlab.example.com'
mattermost_external_url 'https://mattermost.example.com'
nginx['listen_port'] = 443
nginx['listen_https'] = true
As long as your firewall allows traffic on port 443 to nginx, you're OK. If you need that to be a specific IP address, set nginx['listen_address'].
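Remember that Omnibus GitLab only picks up gitlab.rb changes after a reconfigure:
sudo gitlab-ctl reconfigure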

How to setup Caddy to get HTTPS on my server

I've been having issues getting an HTTPS address for my server. Let's say I have a domain www.mydomain.com.
If I run this command, it just works fine and I can get HTTPS:
caddy -host www.domain.com
But I have some proxies that I use for Django, so I have a Caddyfile. This is how the Caddyfile is set up:
# Django
www.mydomain.com {
    root /root/my_projects/my_project
    proxy / 127.0.0.1:8000 {
        transparent
        except /static
    }
    log /var/log/caddy.log
}
So if I run this command:
caddy -host CaddyFile
it's not giving me HTTPS. Instead this is the output:
Activating privacy features... done.
Serving HTTP on port 2015
http://.:2015/caddyfile
So how should I configure the file or what command should I use to get HTTPS on my server with the proxy and the root folder that I set in the CaddyFile?
Thanks.
I'm guessing you're using Caddy v1.
The Caddy docs say:
-host
The default hostname or IP address to listen on. Sites defined in the Caddyfile without a hostname will assume this one. This is usually used with -port to quickly get simple sites up and running without a Caddyfile.
The -host option has probably caused your Caddyfile to be ignored.
If your Caddyfile is in the directory you run caddy from, try removing all arguments and just run caddy. It will automatically pick up the Caddyfile.
Otherwise, try caddy -conf <path/to/your/Caddyfile>.
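For example (the Caddyfile path here is just a guess based on the root in your question; -agree and -email let Caddy v1 obtain the Let's Encrypt certificate non-interactively):
# point -conf at wherever your Caddyfile actually lives
caddy -conf /root/my_projects/my_project/Caddyfile -agree -email you@example.com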

Where do I put my Node JS app so it is accessible via the main website?

I've recently installed a Node.js app (Keystone) in my home/myusername/myappname directory.
When I visit www.mydomain.com, nothing displays, even after starting my Node.js app.
Where should these files be?
I am running Ubuntu 16.04.
In the past I have worked with a /var/www folder, but I am not using Apache. Do I need to manually create this folder?
Thanks!
For your app to be visible it has to be running (obviously) and accessible on port 80 (if you want it to be available without adding a port number to the URL).
It doesn't matter where it is on the disk as long as it's running.
You don't need Apache or nginx or any other server. Your Node app may listen on port 80. But alternatively it can listen on some other port and your other server (Apache, nginx, etc.) can proxy the requests to that port.
But if your app is listening on, e.g. port 3000 then you should be able to access it as http://www.example.com:3000/.
Also, make sure that your domain is configured correctly. Its A record (for IPv4) or AAAA record (for IPv6) for the www subdomain should point to the publicly accessible IP address of your server.
And make sure that the port you use is not blocked by the firewall.
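On Ubuntu with ufw, for example, that check could look like this:
# allow HTTP traffic and confirm the rule is active
sudo ufw allow 80/tcp
sudo ufw status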
Update
To see how you can set the port with Keystone, see:
http://keystonejs.com/docs/configuration/#options-server
It can be either changed in the config or you can run your app with:
PORT=80 node yourApp.js
instead of:
node yourApp.js
but keep in mind that to use a port number below 1024 you will usually need the program to run as root (or to add a special privilege, which is more complicated).
It also means that this will be the only application serving port 80 on this server, even if you have more domain names.
If you don't want to run as root, or you want to host more applications, it is easiest to install nginx and let it proxy the requests. Such a configuration is called a "reverse proxy"; it's good to search for info and tutorials using that phrase.
The simplest nginx config would be something like this:
server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://localhost:3000;
    }
}
You can set it in:
/etc/nginx/sites-available/default
or in a different file, e.g.:
/etc/nginx/sites-available/example
and then symlinked as /etc/nginx/sites-enabled/example
You need to restart nginx after changing the config.
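On Ubuntu 16.04 that could look like this (assuming the example file name used above):
# enable the site, test the config, then reload nginx
sudo ln -s /etc/nginx/sites-available/example /etc/nginx/sites-enabled/example
sudo nginx -t && sudo systemctl restart nginx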
You can find more options on configuring reverse proxies here:
https://www.nginx.com/resources/admin-guide/reverse-proxy/
You need to set up a proxy between Apache and your Node.js application because Node.js has its own built-in server. Suppose your Node.js app is served on port 9000. Then you need the proxy to redirect all traffic on port 80 to port 9000, where the Node.js app is running.
1. Enable mod_proxy
You can do this with a2enmod.
sudo a2enmod proxy
sudo a2enmod proxy_http
2. Set the proxy
Edit the /etc/apache2/sites-available/example.com.conf file and add the following lines:
ProxyRequests Off
Order deny,allow
Allow from all
ProxyPass / http://0.0.0.0:9000
ProxyPassReverse / http://0.0.0.0:9000
This basically says: "redirect all traffic from the root / to http://0.0.0.0:9000". The host 0.0.0.0:9000 is where your app is running.
Finally, restart Apache to apply the changes.
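On Ubuntu that is typically (assuming the site file from step 2 still needs to be enabled):
sudo a2ensite example.com.conf
sudo systemctl restart apache2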

Assigning a domain name to localhost for development environment

I am building a website and would rather not reconfigure it from pointing at http://127.0.0.1 to pointing at http://www.example.com. Furthermore, the certificate I am using was of course issued for the proper domain name www.example.com, but my test environment makes calls to 127.0.0.1, which breaks the security.
What I currently want to do is configure my development environment to map the domain name www.example.com to 127.0.0.1, so that http://www.example.com/xyz is routed to http://127.0.0.1:8000/xyz and https://www.example.com/xyz is routed to https://127.0.0.1:8080/xyz.
I am not using Apache. I am currently using node.js as my web server and my development environment is in Mac OS X Lion.
If you edit your /etc/hosts file you can assign an arbitrary hostname to 127.0.0.1.
Open up /etc/hosts in your favorite text editor and add this line:
127.0.0.1 www.example.com
I'm not sure how to avoid specifying the port in the HTTP requests you make to example.com, but if you must avoid specifying it at the request level, you could run Node.js as root so it can listen on port 80.
Edit: After editing /etc/hosts, you may still have the DNS request for that domain cached. You can clear the cached entry by running this on the command line:
dscacheutil -flushcache
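Put together, a minimal sketch of those steps on OS X (assuming your dev server is already listening on port 8000):
# map the domain to the loopback address (requires sudo)
echo "127.0.0.1 www.example.com" | sudo tee -a /etc/hosts

# flush the cached DNS entry
dscacheutil -flushcache

# verify: this should now hit your local server
curl http://www.example.com:8000/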
