How to return an empty response with Varnish?

When a request comes in to a Varnish server, I would like to return an empty response, or simply close the connection, if the requested server name is not known.
For example, on nginx (the backend behind Varnish) I did this:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    return 444;
}

server {
    listen 80;
    listen [::]:80;
    server_name my.example.org;
}
So, when a user/robot hits this nginx server with an IP address or an unknown host, it gets: The connection was reset.
How do I do the same with Varnish?
With this configuration on the nginx side and nothing more on the Varnish side, if I try to access the Varnish server by its public IP, I get: Error 503 Backend fetch failed - Backend fetch failed - Guru Meditation.
Perhaps there is a way, on the Varnish side, to simply close the connection when the backend (nginx) returns 444.
varnishlog says:
- BereqMethod GET
- BereqURL /
- BereqProtocol HTTP/1.1
...
- BereqHeader X-Varnish: 1540833
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 33 default X.X.X.X 80 X.X.X.X 34862
...
- FetchError HTC eof (-1)
- BackendClose 33 default
...
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Backend fetch failed
- BerespHeader Date: Fri, 10 Feb 2023 10:10:48 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
I want to "process" this error.

If Varnish is hosted on the same machine as your Nginx server, Varnish should be listening on port 80 and Nginx on port 8080.
Once Varnish can reach Nginx, the Backend fetch failed issue will go away.
You don't need to configure anything special in Varnish: whatever Nginx returns, Varnish will pass through. However, if you want to handle this in Varnish before Nginx is even reached, you could use the following VCL code:
sub vcl_recv {
    if (req.http.Host != "my.example.org") {
        return (synth(403));
    }
}
This assumes that my.example.org is the right Host header. This also assumes that returning a synthetic 403 Forbidden is an acceptable return value.
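If the empty response from the question matters, the synthetic error page can also be stripped in vcl_synth. A minimal sketch, assuming Varnish 4+ VCL and the 403 status used above:

sub vcl_synth {
    # Deliver the 403 with an empty body instead of Varnish's default HTML error page.
    if (resp.status == 403) {
        set resp.http.Content-Type = "text/plain";
        synthetic("");
        return (deliver);
    }
}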

Related

Docker + Nginx: Websocket returns 404 not found

I'm struggling to implement a websocket connection between an SSL server and the client.
Architecture:
Proxy: Nginx
Host: Docker (Swarm)
Webserver: Node.js (express)
Client (Postman, later vue.js)
Nginx settings (app.conf):
server {
    listen 443;
    listen [::]:443;
    client_max_body_size 100M;
    server_name search.app search.app.host.ads;

    location / {
        proxy_pass http://search-service:3020; # docker container
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
search-server (server.js)
async function startServer() {
    // variable definition
    server = express();
    // set http server
    let httpServer = http.createServer(server);
    // init loaders -> websocket is defined here
    loaders(server, httpServer);
}
search-service (websocket.js) (this is how the websocket is created):
let wsSearch = new websocket.Server({ server: httpServer, path: "/socket/websocketSearch" });
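For context, a self-contained version of that setup with the ws package (v7) would look roughly like this - a sketch based on the snippets above, not the actual project code:

const http = require("http");
const express = require("express");
const WebSocket = require("ws");

const app = express();
const httpServer = http.createServer(app);

// Upgrade requests are only accepted on exactly this path,
// e.g. ws://localhost:3020/socket/websocketSearch
const wss = new WebSocket.Server({ server: httpServer, path: "/socket/websocketSearch" });

wss.on("connection", (socket) => {
    socket.send("connected");
});

httpServer.listen(3020);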
The websocket works properly on localhost using this URL:
ws://localhost:3020/socket/websocketSearch
After deploying to the production site, the URL will be
wss://search.app.host.ads/socket/websocketSearch
Trying to connect to production websocket using Postman returns following error:
Error: Unexpected server response: 404
Handshake Details
Request URL: https://search.app.host.ads/socket/websocketSearch
Request Method: GET
Status Code: 404 Not Found
Request Headers
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: gzRuxZ2QYTOladlXSenjmw==
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits
Host: search.app.host.ads
Response Headers
Server: nginx/1.19.4
Date: Tue, 03 Aug 2021 13:07:45 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 161
Connection: keep-alive
X-Powered-By: Express
Access-Control-Allow-Origin: *
Content-Security-Policy: default-src 'none'
X-Content-Type-Options: "nosniff"
Which package do I use to implement websocket?
WS Version 7.5.3
I've followed several instructions, like Nginx guides and related StackOverflow issues, but I didn't manage to connect to my websocket.
Do you have any idea where my mistake is?
Thanks in advance.
If any further information is needed, I'll try to provide it.
Best regards

Getting 502 Bad Gateway error with ngrok when I use https localhost url in a Node App

I'm developing a Node App. I need https for receiving callback URLs from 3rd party Apps. So I added SSL certificate.
ngrok works only with an http URL (http://localhost:3000).
I'm using the command ngrok http 3000. But when I access the ngrok https URL, I get a 502 Bad Gateway error in the browser.
How do I make ngrok work with the https://localhost:3000 URL?
If you are using it for signup or login with Google/Facebook, then I can suggest another way. You can use
https://tolocalhost.com/
and configure how it should redirect the callback to your localhost. This is for development purposes only.
ngrok can provide https support itself - this is one of its major use cases (at least for me) - so you don't need to create any SSL certificates.
Step-by-step guide
Here's a simple testing file:
$ cat t.html
<body>
<h1>test</h1>
</body>
Serving it with a simple http server on localhost:
python -m SimpleHTTPServer 7070
Running ngrok
$ ngrok http 7070
ngrok by @inconshreveable (Ctrl+C to quit)
Session Status online
Session Expires 7 hours, 59 minutes
Update update available (version 2.2.8, Ctrl-U to update)
Version 2.2.4
Region United States (us)
Web Interface http://127.0.0.1:4040
Forwarding http://4580e823.ngrok.io -> localhost:7070
Forwarding https://4580e823.ngrok.io -> localhost:7070
Connections ttl opn rt1 rt5 p50 p90
0 0 0.00 0.00 0.00 0.00
Checking
curl -D - https://4580e823.ngrok.io/t.html
HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/2.7.10
Date: Tue, 23 Oct 2018 20:03:45 GMT
Content-type: text/html
Content-Length: 33
Last-Modified: Tue, 23 Oct 2018 19:53:09 GMT
Connection: keep-alive
<body>
<h1>test</h1>
</body>
That's it
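If the local app really must keep serving its own TLS (as in the question's https://localhost:3000 setup), ngrok can also forward to an https upstream; a sketch, assuming a reasonably recent ngrok version (a self-signed local certificate may still need extra handling):

# forward the public tunnel to a local HTTPS listener instead of plain HTTP
ngrok http https://localhost:3000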

Tcp request failed with status 400 in Elastic Beanstalk Node Server with Nginx Proxy Server?

> 14.195.188.230 - - [18/Mar/2017:16:43:11 +0000] "(004026579154BP05000004026579154111213V0000.0000N00000.0000E000.0000000000.0010000000L0000021C)" 400 173 "-" "-" "-"
This is the error message that I received:
HTTP/1.1 400 Bad Request
Server: nginx/1.10.1
Date: Sun, 19 Mar 2017 02:19:35 GMT
Content-Type: text/html
Content-Length: 173
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.10.1</center>
</body>
</html>
This is the log entry that I can see in my nginx access log; I need this data in my node server.
(004026579154BP05000004026579154111213V0000.0000N00000.0000E000.0000000000.0010000000L0000021C)
1) I am using Elastic Beanstalk, and I don't know how I can pass the above value to my node server module. Is it possible to get that value as an http or https request inside my node express module?
2) If I have to run a net server, then on which port should I listen for TCP, and how will nginx know about that port? For the http server port, I use process.env.port.
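For the second point, a raw TCP listener in Node (separate from the Express/HTTP server) would look roughly like this - a sketch only; the port is a placeholder, and how the Beanstalk nginx proxy can be made to forward raw TCP to it is exactly what the question leaves open:

const net = require("net");

// Plain TCP server, independent of the Express/HTTP listener.
const tcpServer = net.createServer((socket) => {
    socket.on("data", (chunk) => {
        // chunk carries the raw device payload shown in the access log
        console.log("received:", chunk.toString());
    });
});

// 9090 is just an illustrative port number.
tcpServer.listen(9090);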

nginx 502 Bad Gateway on big file uploading

I have a server with the Apache web server and nginx as a proxy. If I upload a 150MB file, it works without any trouble. But if I try to upload a 350MB file (or larger; I need to upload files of up to 2GB), I get an nginx 502 Bad Gateway error.
I'm using Plesk, and I added these directives to the nginx config for testing:
proxy_buffer_size 256k;
proxy_buffers 8 512k;
proxy_busy_buffers_size 512k;
fastcgi_buffers 8 512k;
fastcgi_buffer_size 512k;
And I have increased the client_max_body_size directive too.
I always get this error:
2015/04/19 11:36:09 [error] 31924#0: *43126352 upstream prematurely closed connection while reading response header from upstream, client: x.x.x.x, server: example.com, request: "POST /uptest HTTP/1.1", upstream: "http://x.x.x.x:7080/uptest", host: "example.com", referrer: "http://example.com/uptest"
What should I change?
The FcgidMaxRequestLen or FcgidMaxRequestInMem directive is not large enough, causing the limit to be hit in many cases (http://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html). FcgidMaxRequestInMem also needs to be configured because of a bug in Apache (https://issues.apache.org/bugzilla/show_bug.cgi?id=51747).
Edit the fcgid.conf file, which, depending on your Linux distribution, could be located in /etc/httpd/conf.d/ or /etc/apache2/mods-available/.
Set FcgidMaxRequestLen and FcgidMaxRequestInMem to the same value and then restart Apache.
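For example, to cover the 2GB uploads mentioned in the question (values are in bytes; this is a sketch of the idea, and keeping that much per request in memory via FcgidMaxRequestInMem is a heavy setting):

# fcgid.conf - allow requests up to 2GB
FcgidMaxRequestLen   2147483648
FcgidMaxRequestInMem 2147483648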

Too Many Redirects on OpenShift after push

I have a node.js application running on OpenShift. After testing my code in a local environment, I pushed it up to my instance on OpenShift. After doing so, I went to check those changes on the public site, and my browser reported that I was getting too many redirects. I tried to look at my haproxy status page, and even that was getting too many redirects.
I have done some investigation and here is what I've found:
I checked my nodejs logs and my node server started successfully (no errors)
I've ssh'd into my machine and run curl -vvv $OPENSHIFT_NODEJS_IP:8080, and it returned my index.html as it should.
When I run curl -vvv http://minutepolitics-minutepolitics.rhcloud.com/ I get this response:
RESPONSE:
Hostname was NOT found in DNS cache
Trying 54.81.203.46...
Connected to minutepolitics-minutepolitics.rhcloud.com (54.81.203.46) port 80 (#0)
GET / HTTP/1.1
User-Agent: curl/7.37.1
Host: minutepolitics-minutepolitics.rhcloud.com
Accept: */*
HTTP/1.1 302 Found
Date: Thu, 23 Oct 2014 03:26:06 GMT
Server Apache/2.2.15 (Red Hat) is not blacklisted
Server: Apache/2.2.15 (Red Hat)
Vary: Host
X-Powered-By: PHP/5.3.3
Location: http://minutepolitics-minutepolitics.rhcloud.com/
Connection: close
Accept-Ranges: none
Content-Length: 0
Content-Type: text/html
Closing connection 0
Also, when I ssh into my machine and run /etc/init.d/haproxy start the output is: Starting haproxy: [ALERT] 294/230821 (134951) : Starting frontend main: cannot bind socket [FAILED]
From here, I don't know what to do or try to get this working again.
Any and all help will be greatly appreciated! Thanks!!
