Nginx does not respond to a specific client public IP - Linux

I'm configuring two domains to host two websites: dev.example.com and test.example.com.
I'm using Nginx as the web server, and the two sites, dev and test, are configured with the server_name directive as separate virtual hosts sharing the same public IP.
When I connect to both domains through a VPN (VPN public IP), I get responses from both websites as expected.
When I connect using my personal router's public IP, I only get a response from https://dev.example.com, while https://test.example.com shows the access_log entry below, which means the request reached the server. But the error_log is empty and there is no response in my browser:
Personal_Router_Pub_IP - - [17/Feb/2022:07:02 +0000] "GET / HTTP/2.0" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98 Safari/537.36"
Is it a domain-blacklisting issue or a client-IP-blacklisting issue, and how can I identify the problem?

Yes, the issue was that the domain is blacklisted by the router's ISP, following the regulations of the UAE, where I reside.

Related

Perl script not executed in one of three virtual hosts configured the same

I have three virtual hosts on an Apache2 web server.
Two of them use Perl scripts that work perfectly.
The third I just created with EXACTLY THE SAME configuration for the ScriptAlias directive.
Number one: working
ScriptAlias /cgi-bin/ "/www/old/uep/cgi-bin/"
Number two: working
ScriptAlias /cgi-bin/ "/www/cssm/formulaire/cgi-bin/"
Number three: not working
(the Perl script is offered for download instead of being executed like the other two)
ScriptAlias /cgi-bin/ "/www/cssm/juin2019/cgi-bin/"
All the hosts are configured the same and all the scripts have sufficient permissions to be executed, but only the last one cannot be executed.
I checked the logs: no errors; the access log shows a GET for the script, which has the .pl extension and execute permission.
I emptied the browser cache (everything).
I compared (with Kompare) the three involved .conf files in /etc/apache2/vhosts.d.
All three .conf files are the same, with no difference but the path and the error/access log names.
I use the following settings in the three .conf files for the main directory:
Options Indexes FollowSymLinks
IndexOptions +Charset=UTF-8 NameWidth=*
I don't use symbolic links in the path.
In the HTML file I use a FORM for one of the two sites that work, and a direct link /cgi-bin/forum.pl for the other working site.
NOT WORKING:
192.168.0.4 - - [02/Apr/2019:19:32:54 +0200] "GET /cgi-bin/examenjuin.pl HTTP/1.1" 304 - "http://www.examenjuin2019.cssm/" "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"
WORKING:
192.168.0.4 - - [02/Apr/2019:19:51:38 +0200] "GET /cgi-bin/forum.pl HTTP/1.1" 200 2209 "http://www.uepsoundsystem.dezordi.world/" "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"
I can't understand why two Perl scripts in different folders with exactly the same permissions work while this one can't...
If it is not your script producing the 304 status code, it is your server configuration. A 304 (Not Modified) response tells the browser to reuse its cached copy (here, the previously downloaded script body) instead of fetching a fresh one.
On Apache, play around with the mod_cache settings to prevent your server from sending them.
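As a concrete starting point, here is a minimal sketch of how the third vhost's CGI directory could be configured, assuming Apache 2.4 with mod_headers enabled (the path is the one from the question, and the Cache-Control line is only a debugging aid, not a production setting):

```apache
ScriptAlias /cgi-bin/ "/www/cssm/juin2019/cgi-bin/"

<Directory "/www/cssm/juin2019/cgi-bin/">
    # Make sure CGI execution is enabled and .pl files are handled as CGI
    Options +ExecCGI
    AddHandler cgi-script .pl
    Require all granted

    # Debugging aid: forbid caching so the browser re-requests the script
    # instead of replaying its cached copy after a conditional GET (304)
    Header set Cache-Control "no-store"
</Directory>
```

If the script is still served as a download, check that the cgi (or cgid) module is actually loaded for this vhost.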

How to block a browser by its full User-Agent name, including "compatible"?

I get many GET/POST requests from different IPs with the browser "Mozilla 5.0 compatible MSIE 9.0" on the main page of the website. I don't want to block Mozilla entirely; I need to block only this occurrence. Can I do it?
In my apache logs it looks like:
172.68.25.54 - - [19/Sep/2018:18:00:32 +0300] "GET / HTTP/1.0" 200 11059 "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0)"
If I use this rule:
BrowserMatchNoCase "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0)" bad_br
Deny from env=bad_br
It doesn't work, I think it's just because of un-quoted string or something else...
The first parameter of BrowserMatchNoCase is not an ordinary string but a regex (regular expression). Parentheses are special characters in a regex and need to be escaped with a backslash if you want to match them literally:
BrowserMatchNoCase "Mozilla/5.0 \(compatible; MSIE 9.0; Windows NT 6.0\)" bad_br
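Note that dots are regex metacharacters too, so escaping them as well makes the match stricter. Also, on Apache 2.4 the Deny/Allow directives are superseded by Require; a sketch of both the escaped pattern and the 2.4-style block (assuming mod_setenvif and mod_authz_core are available):

```apache
BrowserMatchNoCase "Mozilla/5\.0 \(compatible; MSIE 9\.0; Windows NT 6\.0\)" bad_br

# Apache 2.4 style (replaces "Deny from env=bad_br")
<RequireAll>
    Require all granted
    Require not env bad_br
</RequireAll>
```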

Redirect with basic auth to another website (in node)

How do I redirect to another website with basic auth (in Node)?
Here is my code
const headers = {
  Authorization: "Basic " +
    Buffer.from(USER + ":" + PASS).toString("base64")
};
ctx.response.set(headers);
ctx.response.redirect(URL);
The first response returns with basic auth:
Authorization →Basic QWRxXxXxYWRtaW4=
Connection →keep-alive
Content-Length →111
Content-Type →text/html; charset=utf-8
Date →Fri, 15 Dec 2017 21:49:57 GMT
Location →http://localhost:8080/edit/data/P0000013
The following redirected GET request doesn't contain basic auth and gets redirected again, to a log-in page.
# General
Request URL:http://localhost:8080/edit/data/P0000013
Request Method:GET
Status Code:302 Found
Remote Address:[::1]:8080
Referrer Policy:no-referrer-when-downgrade
# Request Header
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding:gzip, deflate, br
Accept-Language:en-US,en;q=0.9
Connection:keep-alive
Cookie:....
Host:localhost:8080
Upgrade-Insecure-Requests:1
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36
You can specify the username & password as part of the URL:
ctx.redirect('http://username:password@example.com');
You CANNOT redirect with attached headers (including basic auth); the HTTP protocol doesn't support it.
But you can put your basic-auth key/value pair in the new URL as a query argument:
HTTP/1.x 302 Found
Location: /api?auth=asdf
Or save it in cookies
HTTP/1.x 302 Found
Location: /api
Set-Cookie: auth=asdf
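The header-building part can be sketched in plain Node (Buffer.from replaces the deprecated new Buffer constructor); the cookie helper below assumes a Koa-style ctx and is a hypothetical name, so adapt it to your framework:

```javascript
// Build the value of a Basic auth Authorization header.
function buildBasicAuth(user, pass) {
  // Buffer.from replaces the deprecated `new Buffer(...)` constructor
  return "Basic " + Buffer.from(user + ":" + pass).toString("base64");
}

// Cookie-based redirect, sketched for a Koa-style context `ctx`
// (hypothetical helper; the target app must read the cookie itself).
function redirectWithAuthCookie(ctx, url, user, pass) {
  ctx.cookies.set("auth", buildBasicAuth(user, pass));
  ctx.redirect(url);
}

console.log(buildBasicAuth("user", "pass")); // "Basic dXNlcjpwYXNz"
```

Keep in mind that putting credentials in a URL or cookie exposes them to logs and scripts, so this is only workable over HTTPS and with throwaway credentials.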

nginx + Express: etag caching doesn't work

Given a Node.js application with Express that sits behind nginx, I'm trying to add cache support with ETags.
Without nginx, if the application is called directly, it works: I set the If-None-Match header and receive a 304.
With nginx in front, the response is always 200.
My Nginx config:
location /app/ {
proxy_pass http://app;
}
Log entry from Express:
info: HTTP GET /app/ statusCode=200, url=/app/, connection=upgrade,
host=11.1.1.1, accept=application/json, text/plain, /,
user-agent=Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36,
referer=somesite.com, accept-language=en-US,en;q=0.8,de;q=0.6,
if-none-match=W/"29ae92-4sHBxs6sPcMB3/GypUtubLN0HQ8-gzip",
x-forwarded-proto=http, cookie=io=XAMR4ZH1TzxIvWzkAAAA,
x-forwarded-for=10.43.212.26, x-forwarded-host=somesite.com,
x-forwarded-server=somesite.com, method=GET, httpVersion=1.1,
originalUrl=/app/, responseTime=352
You should enable HTTP/1.1 for proxied requests in the nginx configuration:
location /app/ {
proxy_http_version 1.1;
proxy_pass http://app;
}
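The Express log above shows connection=upgrade, so it may also help to clear the Connection header on the proxied request; a possible refinement, assuming the upstream is named app as in the question:

```nginx
location /app/ {
    proxy_http_version 1.1;
    # Clear the Connection header so conditional requests are proxied
    # over a plain keep-alive connection instead of "upgrade"
    proxy_set_header Connection "";
    proxy_pass http://app;
}
```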

nginx access.log long comma separated GET requests

I am not sure yet, but I am seeing lots of what look like attacks using the PagePeeker API (based on the source IP addresses). Granted, it could be a problem with my nginx, HAProxy load balancer, or PHP-FPM configuration.
Has anyone seen something similar? I have removed my domain name and replaced it with an example domain in the log sample below.
144.76.235.110 - - [06/Jul/2014:01:20:15 +0100] "GET /wp-admin/network/,%20https:/www.domain.org/wp-admin/network/,%20https:/www.domain.org/wp-admin/network/,%20https:/www.domain.org/wp-admin/network/,%20https:/www.domain.org/wp-admin/network/,%20https:/www.domain.org/wp-admin/network/,%20https:/www.domain.org/wp-admin/network/ HTTP/1.1" 301 5 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4"
This is one of the virtual host files in question. I have not seen a recurrence of the errors yet.
NOTE: My SSL termination is done by HAPROXY.
server {
listen 127.0.0.1:8080;
server_name ftmon.org;
port_in_redirect off;
return 301 $real_scheme://www.ftmon.org$request_uri;
}
server {
listen 127.0.0.1:8080;
port_in_redirect off;
server_name www.ftmon.org;
root /home/wpmsite/domains/ftmon.org/public_html;
access_log /home/wpmsite/domains/ftmon.org/logs/access.log main;
error_log /home/wpmsite/domains/ftmon.org/logs/error.log error;
index index.php index.html index.htm;
autoindex off;
pagespeed on;
# Allow pagespeed to bypass the load balancer (HAPROXY)
pagespeed MapOriginDomain http://127.0.0.1:8080 https://www.ftmon.org;
pagespeed MapOriginDomain http://127.0.0.1:8080 http://www.ftmon.org;
# Allow pagespeed to bypass nginx
# pagespeed LoadFromFile http://www.ftmon.org /home/wpmsite/domains/ftmon.org/public_html;
# pagespeed LoadFromFile https://www.ftmon.org /home/wpmsite/domains/ftmon.org/public_html;
include /etc/nginx/common/pagespeed.conf;
# Rewrites for my reset site move to fresh multisite install.
rewrite ^(/files/)(.*)$ /wp-content/uploads/$2 permanent;
rewrite ^(/wp-content/blogs.dir/22/files/)(.*)$ /wp-content/uploads/$2 permanent;
location / {
try_files $uri $uri/ #wpmulti #memcached /index.php$is_args$args;
}
include /etc/nginx/common/locations.conf;
include /etc/nginx/common/wpcommon.conf;
#
# caching solutions
#
# include /etc/nginx/common/wordfence.conf;
include /etc/nginx/common/wpffpc.conf;
}
I solved my problem by disabling PageSpeed:
pagespeed off;
Also, in WordPress I added the following to wp-config.php:
define('ADMIN_COOKIE_PATH', '/');
define('COOKIE_DOMAIN', '');
define('COOKIEPATH', '');
define('SITECOOKIEPATH', '');
I might wait for the nginx PageSpeed code base to stabilize more before revisiting it with WordPress multisite and a network plugin such as Network+.
NOTES
The redirect loop only impacted the main site of the WordPress multisite with the Network+ plugin installed. The other sites in the multisite were fine.
The problem only occurred with thumbnail services (and only with some of them), which initially made me think someone was using these services for attacks.
The website thumbnail service was caching pages (even error pages), so it was very difficult to debug and test possible fixes.
It's interesting that the redirect loops contained a single / in the http/https redirect addresses, i.e. ,%20https:/ rather than ,%20https://.