Hostname configuration Keycloak 19.0.1 - dns

I'm running Keycloak version 19.0.1 with no proxy and I want to set a hostname (mykeycloak) as the frontend URL for my realm.
I start Keycloak with the command below:
.\kc.bat start-dev --hostname mykeycloak --proxy edge 
The configuration endpoint (http://localhost:8080/realms/master/.well-known/uma2-configuration) shows:
{"issuer":"http://mykeycloak/realms/master","authorization_endpoint":"http://mykeycloak/realms/master/protocol/openid-connect/auth",...
I also added the record below to my etc/hosts file:
127.0.0.1 mykeycloak
Finally, when I open the mentioned endpoint (http://mykeycloak/realms/master), I get "This site can't be reached: ERR_CONNECTION_REFUSED".
Am I missing something?
Also, this configuration won't work on a public network unless everyone adds the mykeycloak record to their hosts file or DNS.
So what would the solution be in that case?

It looks like you are simply missing the port, i.e. it should be http://mykeycloak:8080/realms/master.
If you want Keycloak accessible on the default ports, i.e. 80 or 443, you either need a proxy running on those ports that forwards to Keycloak on 8080 and 8443, or you run Keycloak itself on those ports (but that's a bad idea for security reasons, since Keycloak would then need the privileges required to bind ports below 1024).
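For illustration, a minimal nginx server block for the proxy variant (a sketch only, assuming nginx runs on the same host; nginx itself is not part of the question):
server {
    listen 80;
    server_name mykeycloak;
    location / {
        # pass the original host and client details through, which is
        # what Keycloak's --proxy edge mode expects from the proxy
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:8080;
    }
}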

Related

Nextcloud with Traefik - Bad Gateway / Connection Refused

I recently installed Nextcloud on a LAMP stack and want to run Traefik in front of it. For that, I tweaked the apache2 ports.conf to:
Listen 127.0.0.1:180
I also configured a .toml for Traefik that points to this address.
When I try to open the website, it gives me "Bad Gateway".
Trying to solve the error, I searched the Traefik logs and found this:
msg="'502 Bad Gateway' caused by: dial tcp 127.0.0.1:180: connect: connection refused"
Thinking it must be a problem with trusted_proxies, I configured Apache to open its port to the public and changed the Traefik .toml accordingly to see whether it would work.
It did. That means Nextcloud definitely accepts my proxy and the proxying itself works fine.
The problem is, it doesn't work when I configure it on localhost.
The access.log and nextcloud.log show nothing.
Any help?
Many thanks
The solution is simple, but hidden.
Traefik runs in a Docker container, so by default it can't reach services outside the Docker network; 127.0.0.1 inside the container refers to the container itself, not to the host.
The fix is to find the host's address on the docker0 bridge:
ip addr show docker0
Bind Apache2 to that IPv4 address (in my example, Listen 172.17.0.1:180) and point the Traefik config at the same address.
Apache2 will then listen on the docker0 network, which containers can reach.
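For reference, the matching piece of a Traefik v1 file-provider .toml might look like this (a sketch; the frontend host cloud.example.com is a placeholder, only the backend url comes from the answer above):
[backends]
  [backends.nextcloud]
    [backends.nextcloud.servers.server1]
      # point at Apache on the docker0 bridge instead of 127.0.0.1
      url = "http://172.17.0.1:180"

[frontends]
  [frontends.nextcloud]
    backend = "nextcloud"
    [frontends.nextcloud.routes.main]
      rule = "Host:cloud.example.com"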

How do I make a NodeJs project publicly accessible on port 3000?

I have a NodeJS/Express project on an Alibaba Cloud Ubuntu server.
When I run the project and access it with curl localhost:3000 or curl 127.0.0.1:3000, it works!
When I access it with the public IP, e.g. curl 192.x.x.x:3000, it doesn't work, even though I have changed the Express code to server.listen(3000, "0.0.0.0") or server.listen(3000, "192.x.x.x").
FYI, I also have Apache on this server, and accessing it from the Internet via the public IP works fine.
What can I do to solve this problem? Thanks in advance.
PS: 192.x.x.x is my public IP, and it works when accessing the Apache site.
Issue the following command to open port 3000 for TCP traffic.
sudo ufw allow 3000/tcp
You also have to configure your security group and create an inbound rule to allow port 3000. Follow this guide:
https://www.alibabacloud.com/help/doc-detail/25471.htm
Make sure the inbound rule allows TCP traffic (or all traffic) from all sources to port 3000.
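Independent of the firewall rules, you can confirm what the Node process is actually bound to (a quick diagnostic, not part of the linked guide):
sudo ss -tlnp | grep 3000
If this prints 127.0.0.1:3000 rather than 0.0.0.0:3000 or *:3000, the app is still only listening on the loopback interface.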
The fact that you can access your service locally but not publicly points to two possible causes:
The server running your application has blocked port 3000.
You have not configured your server to map port 80 of a specific route to port 3000 (see the sketch below).
Most likely, an essential part of your server configuration is missing.
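For the second option, since Apache is already on the server, a minimal virtual host sketch could look like this (assuming mod_proxy and mod_proxy_http are enabled; node.example.com is a placeholder name):
<VirtualHost *:80>
    ServerName node.example.com
    # relay everything arriving on port 80 to the Node app on 3000
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>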

Amazon Linux cannot access nginx on port 80

I installed nginx on my AMI with yum:
sudo yum install nginx
Then I opened all ports in my AMI security group:
All traffic - All - All - 0.0.0.0/0
Then I started nginx with:
sudo service nginx start
Then I tried to access my nginx web service at http://public-ip,
but I could not reach it that way.
I checked the connection from inside the server:
ssh my_account@my_ip
And then:
wget http://localhost -O-
It worked fine.
I could not figure out the root cause, so I changed the nginx port from 80 to 8081 and restarted the nginx server.
Then I tried to access it again at
http://public-ip:8081
and it worked fine. WTH...
I don't know exactly what is going on. Could you tell me what the problem is?
I see a few possibilities:
You are blocking the connections with a firewall on the host.
Security Group rules disallow this access.
You are in a VPC and have not set up an Internet Gateway or route to the host.
Your nginx configuration explicitly listens on host/port combinations such that it responds on "localhost" but not on the public IP or hostname. You could post your nginx configs and be more specific about how it fails when you try remotely: is it timing out? Not resolving? Returning an HTTP response, just not the one you expected?
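For that last point, the decisive part is the listen directive in the server block (a sketch, assuming a default install under /etc/nginx/):
server {
    listen 80;              # binds 0.0.0.0:80, reachable from outside
    # listen 127.0.0.1:80;  # would answer on localhost only
    server_name _;
    root /usr/share/nginx/html;
}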

My websites are running in Docker containers, how do I implement virtual hosts?

I am running two websites in two Docker containers on a VPS,
e.g. www.myblog.com and www.mybusiness.com.
How can I implement virtual hosts on the VPS so that both websites can use port 80?
I asked this question somewhere else and was advised to take a look at https://github.com/hipache/hipache and https://www.tutum.co/.
They look like a bit of a learning curve. I am trying to find out whether there is a more straightforward way to achieve this. Thanks!
In addition, I forgot to mention that my VPS is an Ubuntu 14.04 box.
Take a look at the jwilder/nginx-proxy project.
Automated nginx proxy for Docker containers using docker-gen
It's the easiest way to proxy your docker containers. You don't need to edit the proxy config file every time you restart a container or start a new one. It all happens automatically for you by docker-gen which generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
Usage
To run it:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock \
jwilder/nginx-proxy
Then start any containers you want proxied with an env var VIRTUAL_HOST=subdomain.yourdomain.com:
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
Provided your DNS is set up to point foo.bar.com at the host running nginx-proxy, the request will be routed to the container with that VIRTUAL_HOST env var set.
Multiple Ports
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
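For example, a container that serves on port 8080 internally could be started like this (my/webapp is a placeholder image name):
$ docker run -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PORT=8080 my/webapp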
You need a reverse proxy. We use nginx and haproxy; both work well and are easy to run from a Docker container. A nice way to run the entire setup is to use docker-compose (formerly fig) to create the two website containers with no externally visible ports, plus a haproxy container with links to both website containers. The entire combination then exposes exactly one port (80) to the network, and the haproxy container forwards traffic to one or the other container based on the hostname of the request. For example, a docker-compose.yml:
---
proxy:
  build: proxy
  ports:
    - "80:80"
  links:
    - blog
    - work
blog:
  build: blog
work:
  build: work
Then a haproxy config such as,
global
    log 127.0.0.1 local0
    maxconn 2000
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    log global
    option dontlognull
    option redispatch
    retries 3
    timeout connect 5000s
    timeout client 1200000s
    timeout server 1200000s

### HTTP frontend
frontend http_proxy
    mode http
    bind *:80
    option forwardfor except 127.0.0.0/8
    option httplog
    option http-server-close
    acl blog_url hdr_beg(host) myblog
    use_backend blog if blog_url
    acl work_url hdr_beg(host) mybusiness
    use_backend work if work_url

### HTTP backends
backend blog
    mode http
    server blog1 blog:80 check

backend work
    mode http
    server work1 work:80 check
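With both files in place, the whole stack comes up with (assuming the haproxy config is baked into the image built from the proxy directory):
$ docker-compose up -d
The links entries make the hostnames blog and work resolvable inside the proxy container, which is why the backend server lines can refer to blog:80 and work:80.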

Gerrit Change Canonical URL

I have set up Gerrit on my subdomain at gerrit.mydomain.com. By default Gerrit runs on port 8080, so I changed the port in the [httpd] section of gerrit.config to 80; now gerrit.mydomain.com opens the Gerrit home page.
Now when I print the canonical URL by running the following command:
git config -f ~/gerrit_folder/etc/gerrit.config gerrit.canonicalWebUrl
It still shows the URL as follows:
http://localhost:8080/
And that's the problem: when I sign in via OpenID, it returns to my domain as gerrit.mydomain.com:8080, and nothing happens because no server is listening there.
Can you please tell me how I can fix this so that it redirects to gerrit.mydomain.com and the canonical URL is changed to http://localhost:80?
The gerrit.canonicalWebUrl setting is not related to the httpd.port configuration. This separation makes sense if you use a proxy server (such as nginx or Apache) where you forward port 80 or 443 (webserver) to port 8080 (Gerrit).
You have to edit your gerrit.config and set the canonicalWebUrl line to the hostname it should advertise.
You should be able to run:
git config -f ~/gerrit_folder/etc/gerrit.config gerrit.canonicalWebUrl "http://gerrit.mydomain.com/"
(Without --add, the existing value is replaced rather than appended as a duplicate.)
I also highly recommend using a reverse proxy with SSL.
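After that, the relevant section of gerrit.config should read as follows (gerrit.config uses git-config syntax):
[gerrit]
        canonicalWebUrl = http://gerrit.mydomain.com/
Restart Gerrit for the change to take effect.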
