SonarQube Returning Bad Gateway Error - linux

I'm trying to serve SonarQube using Caddy. I can reach the site, but it returns 502 Bad Gateway. The SonarQube service appears to be up and running, yet curling it locally is refused.
curl
curl -I 0.0.0.0:9000
curl: (7) Failed to connect to 0.0.0.0 port 9000: Connection refused
sonar.properties
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default heap size is 512Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
# Startup can be long if entropy source is short of entropy. Adding
# -Djava.security.egd=file:/dev/./urandom is an option to resolve the problem.
# See https://wiki.apache.org/tomcat/HowTo/FasterStartUp#Entropy_Source
#
#sonar.web.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.web.javaAdditionalOpts=
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
#sonar.web.host=0.0.0.0
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Default value is 9000.
#sonar.web.port=9000
sonar.web.https.port=8999
Caddyfile
https://....com {
    tls self_signed
    gzip
    proxy / 0.0.0.0:9000
}
http://....com {
    tls off
    gzip
    proxy / 127.0.0.1:9000
}

0.0.0.0 is not a routable address. It is used by servers as a "meta-address" to specify that it should listen on all available addresses as opposed to just one. So a server can listen on 0.0.0.0, but a client cannot make requests to 0.0.0.0. Your Caddyfile should look like this:
https://....com {
    tls self_signed
    gzip
    proxy / 127.0.0.1:9000
}
http://....com {
    tls off
    gzip
    proxy / 127.0.0.1:9000
}
And local cURL requests should look like this: curl 127.0.0.1:9000
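If local cURL is still refused after that change, it is worth confirming what address and port SonarQube is actually bound to before blaming the proxy. A minimal check, assuming SonarQube runs as a systemd service named sonarqube and that the ss utility is available (adjust the names to your setup):
# Is the service actually running?
sudo systemctl status sonarqube
# Which address/port is the web server bound to? (look for something like 127.0.0.1:9000 or 0.0.0.0:9000)
sudo ss -tlnp | grep -E ':9000|:8999'
# Hit the backend directly, bypassing Caddy
curl -I http://127.0.0.1:9000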

Related

How do I set up my Linux Azure VM so that I can connect via a browser?

I'm working on a Linux VM on Azure which was set up by someone else (so I don't know all the details). I'm trying to connect it to a domain name.
The server has a "Hello World" program, so when I go to "example.com" I should be seeing "Hello World". Currently I'm just getting
Safari can't open the page "http://example.com" because Safari can't find the server "my domain.com"
I thought I'd start by making sure that the IP address connects to the server (which it did at one point). So I enter the IP address of the server (let's say it's "12.345.678.901") in the browser, and it can't connect. I get the error
Can't open the page "12.345.678.901" because the server where this page is located isn't responding
There's an Inbound port rule to allow connections for port 8080, so I tried "12.345.678.901:8080" but this time got
Can't open the page "12.345.678.901:8080" because Safari can't connect to the server
I don't know what to try next. Presumably something needs to be enabled on the server to allow the browser to connect?
The other inbound port rules are ssh on port 22 (TCP) and then what I assume are the standard Azure ones (I can't edit or delete them anyway).
To view your Linux VM inside the browser, you need to install a web server. The easiest one to install and get working straight away is nginx.
The first thing you need to do is SSH (port 22) into your VM using your username and the IP address of the machine:
ssh username@ipaddress
This will prompt you for a passphrase to gain access to the VM.
This also assumes your SSH public key exists inside ~/.ssh/authorized_keys on the VM. If you don't have this set up, you need to get the owner of the VM to copy your public key into that file; otherwise you won't be able to connect and will get a Permission denied (publickey) error.
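If password authentication happens to be enabled on the VM (or the owner can run this for you), a quick way to get a key into that file is ssh-copy-id; a rough sketch, assuming a default key pair under ~/.ssh and the same username/ipaddress placeholders as above:
# Generate a key pair if you don't have one yet
ssh-keygen -t rsa -b 4096
# Append the public key to ~/.ssh/authorized_keys on the VM
ssh-copy-id username@ipaddress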
Assuming the above works, you can install the nginx webserver with the following two commands:
sudo apt-get -y update
sudo apt-get -y install nginx
Then, once the web server is installed, add an HTTP inbound port 80 rule inside the network settings. For security reasons, having your web server listen on this port is probably insecure long term; it's just easier to get working when you choose this port to begin with, because it's the default.
You can see the default listening port by viewing the server configuration file with cat /etc/nginx/sites-available/default:
#server {
#    listen 80;
#    listen [::]:80;
#
#    server_name example.com;
#
#    root /var/www/example.com;
#    index index.html;
#
#    location / {
#        try_files $uri $uri/ =404;
#    }
#}
This shows the default port of 80. You can change this default port to 8080, then run sudo service nginx restart to restart the server and apply the changes. Additionally, you can have a look at the How to make Nginx Server Listen on Multiple Ports tutorial, which goes into more depth on how to configure listening ports for nginx web servers.
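As a rough sketch of that change, assuming the stock Ubuntu layout where the active site is /etc/nginx/sites-available/default:
# Edit the site file and change "listen 80;" / "listen [::]:80;" to 8080
sudo nano /etc/nginx/sites-available/default
# Check the configuration for syntax errors, then restart to apply it
sudo nginx -t
sudo service nginx restart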
You should then be able to view your VM from a browser window.
You can also have a look at the Quickstart: Create a Linux virtual machine in the Azure portal tutorial for a step-by-step guide on how to get this set up in Azure.
You should first check whether you have a DNS entry for http://example.com; the reason could be that you do not have a DNS entry when you try to connect via the browser. Since you tried connecting via the IP and it still did not work, I would suggest checking your web server configuration to make sure it is actually listening on port 8080. Also ensure that the web server is running. You can tail the web server log and hit it via the IP as you did earlier to see whether any errors show up; that would at least tell you whether the request you make in your browser is actually reaching the web server.
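For example, assuming the web server is nginx with its default log locations, you could check DNS and watch the logs while retrying the request:
# Does the domain resolve to the VM's public IP?
dig +short example.com
# Watch the web server logs while you retry the request from the browser
sudo tail -f /var/log/nginx/access.log /var/log/nginx/error.log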

Nginx is refusing to connect on AWS EC2

I'm trying to use nginx to set up a simple Node.js server. I'm running the server in the background on port 4000, and my nginx config file is:
server {
    listen 80;
    listen [::]:80;

    server_name 52.53.196.173;

    location / {
        include /etc/nginx/proxy_params;
        proxy_pass http://127.0.0.1:4000;
    }
}
I saved it in /etc/nginx/sites-available and symlinked it into sites-enabled; the nginx.conf file already has the include line to load files from sites-enabled. Then I restarted the service using
sudo service nginx restart
I tried going to 52.53.196.173 and it refuses to connect; however, going to 52.53.196.173:4000 with port 4000 works, but I'm trying to make it listen on port 80 with nginx. I tried putting my .ml domain as server_name with no luck, and I have the IP 52.53.196.173 as the A record in the domain's DNS settings. I'm doing this on an AWS EC2 instance running Ubuntu Server 16.04, and I even tried the full EC2 public DNS URL with no luck. Any ideas?
Edit: I solved it by placing the file directly in sites-enabled instead of using a symlink.
There are a few possible causes. First of all, verify that the nginx server is running and listening on port 80. You can check the listening ports using the following command:
netstat -tunlp
Then check your server firewall and the SELinux policies (or temporarily disable SELinux for testing).
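For example, assuming an iptables/ufw-based firewall and that SELinux is present on the box:
# Firewall: list the current rules / status
sudo iptables -L -n
sudo ufw status          # if ufw is in use
# SELinux: check the mode, and temporarily switch to permissive for testing
sestatus
sudo setenforce 0        # revert with: sudo setenforce 1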
Then verify that the AWS security group is configured to allow HTTP/HTTPS connections on port 80.
P.S.: The outputs of the following commands and configurations will be helpful for troubleshooting:
netstat -tunlp
sestatus
iptables -L
* AWS Security Group Rules
* Nginx configurations ( including main configuration if changed )
P.S.: The OP fixed the problem by moving the config file directly into the sites-enabled directory. Refer to the comments for more info if you are having the same issue.
Most probably, port 80 is not open in your security group, or nginx is not running to accept connections. Please post the nginx status and check the security group.
Check the following:
In the security group, add HTTP (80) and HTTPS (443) rules to the inbound section with source 0.0.0.0/0 (the original answer showed screenshots of the rules for 80 and 443).
In the Network ACL, allow inbound HTTP and HTTPS, and add a custom TCP rule for outbound traffic (inbound and outbound rule screenshots omitted).
Assign an Elastic IP to the EC2 instance and serve public traffic on that IP.
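If you prefer the AWS CLI over the console, the inbound rules from the first step can be added like this; a sketch, assuming the CLI is configured and sg-0123456789abcdef0 stands in for your instance's security group ID:
# Allow HTTP and HTTPS from anywhere (0.0.0.0/0)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0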

HAProxy - LB IP address is not delegated to virtual machines

I am a total beginner with HAProxy, so any advice will be much appreciated.
I have two virtual machines on Microsoft Azure.
They are in virtual network, and they have private IP addresses 10.0.9.4 and 10.0.9.5
I created new Network interface on Microsoft Azure in the same virtual network with IP address 10.0.9.7
Of course this is not delegated to any virtual machines.
Name of interface is : lb.oozie.local, private IP address 10.0.9.7
I added the following to /etc/hosts on .4 and .5:
10.0.9.7 lb.oozie.local
I installed haproxy on both machines, .4 and .5.
The haproxy config file is the following:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats timeout 30s
    #user haproxy
    #group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend localnodes
    bind lb.oozie.local:80
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server oozie1 10.0.9.4:11000 check
    server oozie2 10.0.9.5:11000 check

listen stats lb.oozie.local:1936
    stats enable
    stats uri /haproxy?stats
I did also:
sudo service haproxy restart
Redirecting to /bin/systemctl restart haproxy.service
Validation returns the following:
haproxy -f /etc/haproxy/haproxy.cfg -c
[WARNING] 284/134546 (22658) : config : frontend 'GLOBAL' has no 'bind' directive. Please declare it as a backend if this was intended.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
[WARNING] 284/134547 (22658) : Server nodes/oozie2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 284/134547 (22658) : sendto logger #1 failed: No such file or directory (errno=2)
[ALERT] 284/134547 (22658) : sendto logger #2 failed: No such file or directory (errno=2)
As I understood it, my servers should get the LB IP address (10.0.9.7).
I tried pinging 10.0.9.7 from both 10.0.9.4 and 10.0.9.5, but on both servers it is unreachable:
ping 10.0.9.7
PING 10.0.9.7 (10.0.9.7) 56(84) bytes of data.
From 10.0.9.4 icmp_seq=1 Destination Host Unreachable
From 10.0.9.4 icmp_seq=2 Destination Host Unreachable
Also, if it is relevant: I installed the keepalived mechanism.
I did not set a public IP address for the load balancer; it has only the private IP 10.0.9.7, because the service is invoked directly from servers 10.0.9.4 and 10.0.9.5.
Please help. Thank you in advance.
If you want to use a Load Balancer in front of VMs running HAProxy to create a fault-tolerant pair of HAProxies, you need to create an internal Load Balancer with a frontend IP of 10.0.9.7 (rather than assign 10.0.9.7 to a NIC). It is not possible to ICMP-ping the frontend IP of a Load Balancer; you need to use a TCP ping instead. Make sure health probes are configured and see a signal from your HAProxy VMs directly, rather than the port HAProxy is offering up to clients (the result of the latter is probably not what you want). Familiarize yourself with Standard Load Balancer at https://aka.ms/lbstandard and take note that an NSG must whitelist the ports used with a Standard LB.
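A rough sketch of that setup with the Azure CLI; the resource names here (MyResourceGroup, MyVnet, MySubnet, oozie-lb) are placeholders, and you still need to add health probes and load-balancing rules (az network lb probe create / az network lb rule create) as described in the Standard Load Balancer docs:
# Internal Standard Load Balancer with a 10.0.9.7 frontend inside the existing virtual network
az network lb create --resource-group MyResourceGroup --name oozie-lb --sku Standard \
  --vnet-name MyVnet --subnet MySubnet \
  --frontend-ip-name oozie-frontend --private-ip-address 10.0.9.7 \
  --backend-pool-name haproxy-pool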

Run Node.js & Meteor behind SOCKS proxy

I am connecting to the internet in a country where many sites are blocked, so my method of connection is:
ssh -D 3030 root@46.101.111.333
Then I configured the proxy in the Network Preferences.
This way I am able to connect anywhere using my browser, no problem. But when I want to install NPM modules or Meteor.js plugins from the Terminal, I get an error.
in NPM:
errno: 'ECONNREFUSED' If you are behind a proxy, please make sure that the 'proxy' config is set properly. See: 'npm help config'
in METEOR:
Unable to update package catalog (are you offline?)
If you are using Meteor behind a proxy, set HTTP_PROXY and HTTPS_PROXY
environment variables or see this page for more details:
https://github.com/meteor/meteor/wiki/Using-Meteor-behind-a-proxy
I followed both the Meteor and NPM documentation.
Meteor
export HTTP_PROXY=http://root:password@46.101.111.333:3030
export HTTPS_PROXY=http://root:password@46.101.111.333:3030
meteor update
NPM
npm config set proxy http://root:password@46.101.111.333:3030
npm config set https-proxy http://root:password@46.101.111.333:3030
and some others...
Please help: what else do I need to do? Is this an ssh- or proxy-specific issue? Are my settings correct?
Suppose your SOCKS5 proxy is 127.0.0.1:3030 ...
Install proxychains-ng with Homebrew.
Create ~/.proxychains/proxychains.conf; for example, you may need to add one line:
socks5 127.0.0.1 3030
under the [ProxyList] section:
# proxychains.conf VER 4
#
# HTTP, SOCKS4, SOCKS5 tunneling proxifier with DNS.
#
# The option below identifies how the ProxyList is treated.
# only one option should be uncommented at time,
# otherwise the last appearing option will be accepted
#
#dynamic_chain
#
# Dynamic - Each connection will be done via chained proxies
# all proxies chained in the order as they appear in the list
# at least one proxy must be online to play in chain
# (dead proxies are skipped)
# otherwise EINTR is returned to the app
#
strict_chain
#
# Strict - Each connection will be done via chained proxies
# all proxies chained in the order as they appear in the list
# all proxies must be online to play in chain
# otherwise EINTR is returned to the app
#
#random_chain
#
# Random - Each connection will be done via random proxy
# (or proxy chain, see chain_len) from the list.
# this option is good to test your IDS :)
# Make sense only if random_chain
#chain_len = 2
# Quiet mode (no output from library)
#quiet_mode
# Proxy DNS requests - no leak for DNS data
proxy_dns
# set the class A subnet number to use for the internal remote DNS mapping
# we use the reserved 224.x.x.x range by default,
# if the proxified app does a DNS request, we will return an IP from that range.
# on further accesses to this ip we will send the saved DNS name to the proxy.
# in case some control-freak app checks the returned ip, and denies to
# connect, you can use another subnet, e.g. 10.x.x.x or 127.x.x.x.
# of course you should make sure that the proxified app does not need
# *real* access to this subnet.
# i.e. dont use the same subnet then in the localnet section
#remote_dns_subnet 127
#remote_dns_subnet 10
remote_dns_subnet 224
# Some timeouts in milliseconds
tcp_read_time_out 15000
tcp_connect_time_out 8000
# By default enable localnet for loopback address ranges
# RFC5735 Loopback address range
localnet 127.0.0.0/255.0.0.0
# RFC1918 Private Address Ranges
# localnet 10.0.0.0/255.0.0.0
# localnet 172.16.0.0/255.240.0.0
# localnet 192.168.0.0/255.255.0.0
# Example for localnet exclusion
## Exclude connections to 192.168.1.0/24 with port 80
# localnet 192.168.1.0:80/255.255.255.0
## Exclude connections to 192.168.100.0/24
# localnet 192.168.100.0/255.255.255.0
## Exclude connections to ANYwhere with port 80
# localnet 0.0.0.0:80/0.0.0.0
# ProxyList format
# type host port [user pass]
# (values separated by 'tab' or 'blank')
#
#
# Examples:
#
# socks5 192.168.67.78 1080 lamer secret
# http 192.168.89.3 8080 justu hidden
# socks4 192.168.1.49 1080
# http 192.168.39.93 8080
#
#
# proxy types: http, socks4, socks5
# ( auth types supported: "basic"-http "user/pass"-socks )
#
[ProxyList]
# add proxy here ...
# meanwile
# defaults set to "tor"
socks5 127.0.0.1 3030
Then run meteor by prefixing it with proxychains4, e.g.:
proxychains4 meteor add angularui:angular-ui-router
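The same wrapper approach should also work for npm in that terminal session; for example (the package name is just an illustration):
proxychains4 npm install lodash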

HAProxy error on OpenShift - Failed to execute: 'control restart'

I am trying to configure HAProxy on OpenShift to achieve the following URL-based routing.
When I try to restart my app, I get the following error in the HAProxy log:
Starting frontend http-in: cannot bind socket
Following are the changes I made to haproxy.cfg; in addition, I have also added "user nobody" to the global section. What am I doing wrong? I am new to HAProxy, so I believe it might be a very basic thing I am missing.
frontend http-in
    bind :80
    acl is_blog url_beg /blog
    use_backend blog_gear if is_blog
    default_backend website_gear

backend blog_gear
    mode http
    balance roundrobin
    option httpchk
    option forwardfor
    server WEB1 nodejs-realspace.rhcloud.com weight 1 maxconn 512 check

backend website_gear
    mode http
    balance roundrobin
    option httpchk
    option forwardfor
    server WEB2 website-realspace.rhcloud.com weight 1 maxconn 512 check
There are a few problems with your configuration to note.
The first problem is that you should listen on port 8080.
Ports 80, 443, 8000 and 8443 on the outside will be redirected to port 8080 on your gear.
Second, website-realspace.rhcloud.com is probably the external name of the gear that also hosts your HAProxy, which means you have created a loop.
To access your Node.js app, you'll need to use the 127.a.b.c address assigned to your gear.
Also, your Node.js app most likely cannot listen on the same port as your HAProxy.
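A minimal sketch of the adjusted frontend; 127.a.b.c below is a placeholder for the internal address OpenShift assigns to your gear:
frontend http-in
    # external ports 80/443/8000/8443 are mapped to 8080 on the gear,
    # so bind to the gear's internal address on 8080
    bind 127.a.b.c:8080
    acl is_blog url_beg /blog
    use_backend blog_gear if is_blog
    default_backend website_gear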
