Nginx access log entries don't get created for some connections when they happen - node.js

I have a website architecture as follows:
internet --> loadbalancer --> webserver/api
So there is an nginx on the load balancer machine set up as a load balancer, and there is also an nginx on the webserver/api node functioning as a reverse proxy. The webserver receives requests from browsers (via the load balancer), accesses the api over HTTP and renders the page to the browser. The webserver and api are both nodejs apps.
The nginx load balancer has log entries for the webserver-->api connections, but it doesn't log the initial client browser-->webserver connections until the browser is closed (tested with Chrome and Firefox). It's as though the connection is kept in an unfinished state until the browser is fully shut down, at which point the log entry is written.
nginx load balancer access logs:
110.110.110.101 - - [21/Feb/2019:22:21:23 +0000] loadbalancer01 TCP 200 186833 825 0.047 upstream: 10.0.0.100:443
110.110.110.100 - - [21/Feb/2019:22:21:37 +0000] loadbalancer01 TCP 200 24327 3856 21.991 upstream: 10.0.0.100:443 <-- only created after browser is closed
110.110.110.100 - ip of client connecting with Chrome/Firefox
110.110.110.101 - webserver/api node public interface
10.0.0.100 - webserver/api node private interface
The webserver->api connection is logged first even though it clearly happens second, and the client browser->webserver connection only gets logged when the client browser is completely closed.
Is there some sort of buffering happening? I'm not using the buffer parameter in the stream block logging configuration:
log_format combined '$remote_addr - - [$time_local] $hostname $protocol $status $bytes_sent $bytes_received $session_time upstream: $upstream_addr';
access_log /var/log/nginx/access.log combined;
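(For reference, buffered writes would have to be enabled explicitly; a buffered version of the directive above would look something like this, with purely illustrative values:)
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;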
Why does the connection only get logged when the browser is closed? How can I ensure that the initial connection is logged when the connection happens?
[update - added log configuration, also note that ips have been redacted]

I figured this out by comparing the headers of a browser connection to the load balancer with those of a connection initiated from a script. It turns out that browsers set the "Connection: keep-alive" header, which keeps the connection open so multiple requests can be sent over the same connection.
A useful command to run against the load balancer's public interface to see the connection headers:
sudo tcpdump -nn -A -s1500 -l -i eth0 port 80
The other thing to note is that if you are using ufw as your firewall, it sets up the underlying iptables rules with rate limits, so it only logs the first 3 connections per minute.
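You can see that rate limit in the generated rules; on a typical ufw setup the LOG rules carry a limit match (roughly 3 per minute with a small burst, though the exact values depend on the ufw version and log level):
sudo iptables -S | grep LOG
# typically prints rules containing something like: -m limit --limit 3/min --limit-burst 10 -j LOG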

Related

Accessing cups from node within a docker container

So I have a node server within a docker container. Right now I would like to have it communicate with the parent system's CUPS server. However, when I make an AJAX call to that server with port 631 exposed, I get a 400 Bad Request error.
When looking at the CUPS logs it gives this reason for the rejection:
Request from "localhost" using invalid Host: field "host.docker.internal:631"
To reach the parent machine at all I have to use host.docker.internal, but I have not figured out a way to get CUPS to ignore the Host header or treat it as localhost.
CUPS is set to accept any ServerAlias and anything on port 631, so it "should" accept the call. Any ideas?
I had the same problem with CUPS (2.3.4) on macOS. I spent several hours trying to fix the invalid Host: field error.
It seems that there's a bug: the error occurs even when using ServerAlias * in the CUPS configuration.
For those who are looking for a workaround:
We have to change the Host header sent from the docker container to localhost. To do so, I set up an Nginx container listening on port 8888 that rewrites the Host field while proxy_passing to the host's CUPS server.
This is the nginx conf.d:
server {
    listen 8888;

    location / {
        proxy_pass http://host.docker.internal:631;
        proxy_set_header Host localhost;
    }
}
Now, instead of connecting to host.docker.internal:631, we connect the CUPS client to localhost:8888. (I set up the Nginx server in the same docker container; you might want to set up a separate container depending on your needs.)
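If you go with a separate container, a rough sketch (image, file names and paths below are just examples) is to mount the config above into a stock nginx image and publish the port:
docker run -d --name cups-proxy \
    --add-host=host.docker.internal:host-gateway \
    -p 8888:8888 \
    -v "$PWD/cups-proxy.conf":/etc/nginx/conf.d/default.conf:ro \
    nginx:alpine
# --add-host is needed on Linux (Docker 20.10+); Docker Desktop provides host.docker.internal automatically
From the node container you would then target the proxy container by name over a shared Docker network instead of localhost:8888.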

HAProxy - LB IP address is not delegated to virtual machines

I am a total beginner with HAProxy, so any advice will be much appreciated.
I have two virtual machines on Microsoft Azure.
They are in a virtual network, and they have private IP addresses 10.0.9.4 and 10.0.9.5.
I created a new network interface on Microsoft Azure in the same virtual network with IP address 10.0.9.7.
Of course, this is not delegated to any virtual machine.
The name of the interface is lb.oozie.local, private IP address 10.0.9.7.
I added the following to /etc/hosts on .4 and .5:
10.0.9.7 lb.oozie.local
I installed haproxy on both machines, .4 and .5.
The haproxy config file is the following:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats timeout 30s
    #user haproxy
    #group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend localnodes
    bind lb.oozie.local:80
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server oozie1 10.0.9.4:11000 check
    server oozie2 10.0.9.5:11000 check

listen stats lb.oozie.local:1936
    stats enable
    stats uri /haproxy?stats
I did also:
sudo service haproxy restart
Redirecting to /bin/systemctl restart haproxy.service
Validation returns the following:
haproxy -f /etc/haproxy/haproxy.cfg -c
[WARNING] 284/134546 (22658) : config : frontend 'GLOBAL' has no 'bind' directive. Please declare it as a backend if this was intended.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
[WARNING] 284/134547 (22658) : Server nodes/oozie2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 284/134547 (22658) : sendto logger #1 failed: No such file or directory (errno=2)
[ALERT] 284/134547 (22658) : sendto logger #2 failed: No such file or directory (errno=2)
As I understood it, my servers should get the LB IP address (10.0.9.7).
I tried to ping 10.0.9.7 from 10.0.9.4 and 10.0.9.5,
but on both servers it is unreachable:
ping 10.0.9.7
PING 10.0.9.7 (10.0.9.7) 56(84) bytes of data.
From 10.0.9.4 icmp_seq=1 Destination Host Unreachable
From 10.0.9.4 icmp_seq=2 Destination Host Unreachable
Also, if it is relevant:
I installed the keepalived mechanism.
I did not set a public IP address for the load balancer; it has only the private IP 10.0.9.7, because the service is invoked directly from servers 10.0.9.4 and 10.0.9.5.
Please help.
Thank you in advance.
If you want to use a Load Balancer in front of VMs with HAProxy to create a fault-tolerant pair of HAProxies, you need to create an internal Load Balancer with a frontend IP of 10.0.9.7 (rather than assigning 10.0.9.7 to a NIC). It is not possible to ICMP ping the frontend IP of a Load Balancer; you need to use a TCP ping instead. Make sure health probes are configured and get a signal from your HAProxy VMs directly, rather than probing the port HAProxy is offering up to clients (the result of the latter is probably not what you want). Familiarize yourself with Standard Load Balancer at https://aka.ms/lbstandard and take note that an NSG must whitelist the ports used with a Standard LB.
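A rough sketch of what that could look like with the Azure CLI (all resource names below are placeholders, and flags may differ slightly between CLI versions):
# internal Standard Load Balancer with a static private frontend IP of 10.0.9.7
az network lb create --resource-group myRG --name haproxy-ilb --sku Standard \
    --vnet-name myVnet --subnet mySubnet \
    --frontend-ip-name haproxyFrontend --private-ip-address 10.0.9.7 \
    --backend-pool-name haproxyPool
# TCP health probe that checks HAProxy itself (e.g. its stats port) rather than the client-facing port
az network lb probe create --resource-group myRG --lb-name haproxy-ilb \
    --name haproxyProbe --protocol tcp --port 1936
You would still need a load-balancing rule (az network lb rule create) to forward client traffic to the backend pool, and since ICMP to the frontend will not answer, test it with a TCP check such as nc -zv 10.0.9.7 80.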

How can I configure the firewall of an Ubuntu server so that the server accepts connections from the terminals on PostgreSQL port 5432?

Configuration: Server: Ubuntu Server 16.04 LTS, managed with Webmin
Terminal: Windows 7, using pgAdmin III
I was unable to establish the connection between my terminal and my server through pgAdmin III on port 5432.
On my server I made the following changes:
In the file postgresql.conf, under # Connection Settings, I edited:
listen_addresses = '*'
In the file pg_hba.conf, under # IPv4 local connections, I added:
host all all 172.x.x.x/32 md5   # this is the terminal's IP (x's redacted)
I checked the port, which is the default 5432, and the user is postgres.
When I try to establish the connection in pgAdmin III:
Host: my server IP (ping from the console is successful)
Port: 5432
Username: postgres
Password: my password
It shows me the following message:
Server doesn't listen
The server doesn't accept connections: the connection library reports
could not connect to server: Connection refused (0x0000274D/10061) Is the server running on host "Mi SERVER IP Hidden" and accepting TCP/IP connections on port 5432?
If you encounter this message, please check if the server you're trying to contact is actually running PostgreSQL on the given port. Test if you have network connectivity from your client to the server host using ping or equivalent tools. Is your network / VPN / SSH tunnel / firewall configured correctly?
For security reasons, PostgreSQL does not listen on all available IP addresses on the server machine initially. In order to access the server over the network, you need to enable listening on the address first.
For PostgreSQL servers starting with version 8.0, this is controlled using the "listen_addresses" parameter in the postgresql.conf file. Here, you can enter a list of IP addresses the server should listen on, or simply use '*' to listen on all available IP addresses. For earlier servers (Version 7.3 or 7.4), you'll need to set the "tcpip_socket" parameter to 'true'.
You can use the postgresql.conf editor that is built into pgAdmin III to edit the postgresql.conf configuration file. After changing this file, you need to restart the server process to make the setting effective.
If you double-checked your configuration but still get this error message, it's still unlikely that you encounter a fatal PostgreSQL misbehaviour. You probably have some low level network connectivity problems (e.g. firewall configuration). Please check this thoroughly before reporting a bug to the PostgreSQL community.
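If the configuration above is correct and the connection is still refused, the host firewall is the usual remaining suspect. A minimal sketch of the server-side steps on Ubuntu, assuming ufw and the default postgresql systemd service name:
sudo ufw allow 5432/tcp              # or restrict it: sudo ufw allow from <terminal-ip> to any port 5432 proto tcp
sudo systemctl restart postgresql    # restart so listen_addresses = '*' takes effect
sudo ss -ltnp | grep 5432            # confirm postgres is now listening on 0.0.0.0:5432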

SonarQube Returning Bad Gateway Error

I'm trying to serve SonarQube using Caddy. I'm able to reach the site, but it returns 502 Bad Gateway. The service appears to be up and running. Also, curling locally is rejected.
curl
curl -I 0.0.0.0:9000
curl: (7) Failed to connect to 0.0.0.0 port 9000: Connection refused
sonar.properties
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default heap size is 512Mb.
# Use the following property to customize JVM options.
# Recommendations:
#
# The HotSpot Server VM is recommended. The property -server should be added if server mode
# is not enabled by default on your environment:
# http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
# Startup can be long if entropy source is short of entropy. Adding
# -Djava.security.egd=file:/dev/./urandom is an option to resolve the problem.
# See https://wiki.apache.org/tomcat/HowTo/FasterStartUp#Entropy_Source
#
#sonar.web.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.web.javaAdditionalOpts=
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
#sonar.web.host=0.0.0.0
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Default value is 9000.
#sonar.web.port=9000
sonar.web.https.port=8999
Caddyfile
https://....com {
    tls self_signed
    gzip
    proxy / 0.0.0.0:9000
}

http://....com {
    tls off
    gzip
    proxy / 127.0.0.1:9000
}
0.0.0.0 is not a routable address. It is used by servers as a "meta-address" to specify that it should listen on all available addresses as opposed to just one. So a server can listen on 0.0.0.0, but a client cannot make requests to 0.0.0.0. Your Caddyfile should look like this:
https://....com {
    tls self_signed
    gzip
    proxy / 127.0.0.1:9000
}

http://....com {
    tls off
    gzip
    proxy / 127.0.0.1:9000
}
And local cURL requests should look like this: curl 127.0.0.1:9000
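To confirm what SonarQube is actually bound to (a quick check, assuming a Linux host with iproute2 installed):
sudo ss -ltnp | grep 9000
If nothing is listed, SonarQube itself is not up on that port (check its logs); if it shows 127.0.0.1:9000 or *:9000, the proxy target above should work.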

Node.js application not loading properly in Chromium: Connection timed out while reading response header from upstream

I'm running a Node.js application in a subdomain of a WP site. The WP site itself is running on Nginx, php-fpm and Varnish and works just fine, so I'm using Nginx to proxy connections to the Node app.
With Firefox, the Node app works perfectly. The home page and every other page loads, including the admin end. However, on Chromium, the site does not load properly. If I attempt to view the home page, the main content area loads, but the sidebar does not. And I get the following message in the Web console:
WebSocket connection to 'ws://forum.site.com/socket.io/1/websocket/91qNR-mt333a'
failed: Unexpected response code: 502
In the Nginx log file, I see entries like:
2089 upstream prematurely closed connection while reading response header from upstream,
client: 127.0.0.1, server: forum.site.com, request: "GET /socket.io/1/websocket/91qNR-
mt333a HTTP/1.1", upstream: "http://127.0.0.1:4567/socket.io/1/websocket/91qNRaWZ3-
mt333a", host: "forum.site.com"
And if I try to navigate between posts on the site, I get these messages in the Web console:
Failed to load resource: the server responded with a status of 504 (Gateway Time-out)
http://forum.site.com/socket.io/1/xhr-polling/91qNRaWZ3rYcF-mt333a?t=1396434040701
Then these lines from Nginx error log:
2128 upstream timed out (110: Connection timed out) while reading response header from
upstream, client: 127.0.0.1, server: forum.site.com, request: "GET /socket.io/1/xhr-
polling/uH9QTAWUGmomqFoy333e?t=1396434162051 HTTP/1.1", upstream:
"http://127.0.0.1:4567/socket.io/1/xhr-polling/uH9QTAWUGmomqFoy333e?t=1396434162051",
host: "forum.site.com", referrer: "http://forum.site.com/category/35/dual-boots"
I've looked at similar issues on this site and other sites and tried to implement the suggested solutions, but no luck so far. For example, in the Nginx config for the subdomain, I've added the following:
proxy_buffers 8 32k;
proxy_buffer_size 64k;
proxy_connect_timeout 120;
proxy_read_timeout 300;
And played around with different values for the last two lines, but still no luck.
What baffles me is that the site works perfectly on FF. It's only on Chromium that I'm having this problem. I've not tried IE, but I'm not really concerned about that browser at this point.
I'm sure there's something that I'm overlooking, but I don't know what.
Btw, the site exhibits the same behavior on Android's default browser.
Could Varnish be the culprit here? I have Varnish (port 80) in front of Nginx (8080). Does Varnish play nice with WebSockets?
I finally figured out that the problem is with Varnish, which by default does not handle WebSocket traffic. It has to be explicitly configured for it.
See this link for the solution.
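For reference (the linked page is not quoted here), the approach documented by Varnish is to pipe WebSocket upgrades straight through to the backend. In Varnish 4.x VCL it looks roughly like this (Varnish 3 is very similar; adapt to your version):
sub vcl_recv {
    # hand WebSocket upgrade requests off to pipe mode so Varnish stops interpreting them
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }
}

sub vcl_pipe {
    # carry the upgrade headers over to the backend connection
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
        set bereq.http.connection = req.http.connection;
    }
}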

Resources