Varnish Grace Period

These are my settings for a grace period (/etc/varnish/default.vcl):
sub vcl_recv {
    ...
    # accept objects up to 360000 s past their TTL
    set req.grace = 360000s;
    ...
}

sub vcl_fetch {
    ...
    # keep objects available for grace delivery up to 360000 s past their TTL
    set beresp.grace = 360000s;
    ...
}
I tested Varnish on localhost with a Node.js server as the backend. I started the server and the site was up. Then I stopped the server, and in less than 2 minutes the site went down with:
Error 503 Service Unavailable
Service Unavailable
Guru Meditation:
XID: 1890127100
Varnish cache server
Could you tell me what could be the problem?

Note that only your cached content will be served during the grace period.
Try sending the same request several times while the backend is still up, and confirm (with varnishlog, for example) that no new backend connections are made as long as you are within the object's TTL.
Then stop your backend, wait until the content's TTL has expired, and issue the initial request again.
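As a concrete check: grace can only extend objects that made it into the cache in the first place, so the backend has to return cacheable responses. A minimal sketch (not from the original post; the port and max-age are assumptions) of a Node.js backend that gives Varnish something to hold onto:
// Hypothetical Node.js backend for testing grace mode; the port and TTL
// are illustrative assumptions, not values from the original post.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/plain',
    // Without a cacheable response (no Set-Cookie, a max-age > 0, etc.),
    // Varnish stores nothing, and grace has nothing stale to serve.
    'Cache-Control': 'public, max-age=120',
  });
  res.end('hello from the backend\n');
}).listen(8080);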

Related

What happens if you don't end the connection in Express?

What the title says. I ran into a bug where the issue was an Express endpoint not ending the request connection, which caused it to seemingly hang. I am still confused about what the request and response flow looks like.
The Express http server object has a configurable timeout; after that timeout elapses with no response sent on the http connection, the server will close the socket.
Similarly, most http clients at the other end (such as browsers) have some sort of timeout that will likely close the TCP socket if they've been waiting for a response for too long.
The http server timeout is built into the underlying http server object that Express uses and you can see how to configure its timeout here: Express.js Response Timeout. By default, the nodejs http server timeout is set to 0 which means "no timeout" is enforced.
So, if you have no server timeout and no client timeout, then the connection will just sit there indefinitely.
You can configure your own Express timeout with:
// set server timeout to 2 minutes
server.timeout = 1000 * 60 * 2;
See doc here.
Where server is the http server object created by either http.createServer() or app.listen().
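For completeness, here is a minimal sketch of both ways to get at that server object (the port is arbitrary):
const express = require('express');
const http = require('http');

const app = express();

// Option 1: app.listen() returns the underlying http.Server.
const server = app.listen(3000);
server.timeout = 1000 * 60 * 2; // close idle connections after 2 minutes

// Option 2: create the server explicitly and pass the app as its handler.
// const server = http.createServer(app);
// server.timeout = 1000 * 60 * 2;
// server.listen(3000);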

AWS ElasticBeanstalk NodeJS - 502 error: recv() failed (104: Connection reset by peer) while reading response header from upstream

We are planning to migrate our NodeJS platform from plain EC2 to ElasticBeanstalk. During this process, after some struggles, we deployed our app and are able to access it and perform actions. However, for some requests we receive a 502 error.
After checking the logs, we found the following:
/var/log/nginx/error.log
2020/03/16 06:12:09 [error] 3009#0: *119488 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: xxx.xx.xx.xxx, server: , request: "POST /www_auth/register HTTP/1.1", upstream: "http://127.0.0.1:8081/register", host: "****.us-east-2.elasticbeanstalk.com"
It occurs randomly and I don't have any clue. I suspect there are some nginx configuration changes I missed or still need to add.
I'd appreciate any steps or suggestions to solve this!
AWS Elastic Load Balancer pre-connects to backend servers, which can cause a race condition: ELB believes a connection is open, but the Node.js backend has already closed it because of server.keepAliveTimeout, which defaults to 5 seconds of idle inactivity in Node.js 8.x and newer.
Disable server.keepAliveTimeout and server.headersTimeout to work around this issue, or set both to a millisecond value larger than the AWS ELB's Idle Timeout value (a sketch of that variant follows the snippet below).
const app = express();
// Set up the app...
const server = app.listen(8080);
// Disable both timeouts
server.keepAliveTimeout = 0;
server.headersTimeout = 0;
Credit for this solution goes to Shuhei Kagawa:
https://shuheikagawa.com/blog/2019/04/25/keep-alive-timeout/
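If you would rather keep keep-alive enabled, here is a hedged sketch of the second variant, assuming the ELB idle timeout is still at its 60-second default (check your load balancer's actual setting):
const express = require('express');

const app = express();
const server = app.listen(8080);

// Keep the timeouts, but make them outlast the load balancer's idle
// timeout (assumed to be the 60-second AWS default here).
server.keepAliveTimeout = 61 * 1000;
// headersTimeout should exceed keepAliveTimeout.
server.headersTimeout = 62 * 1000;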

Issue with using keepAlive if set for 20 seconds or more

Having a bit of trouble with keepAlive for Apollo subscriptions. Whenever I set the time to 20 seconds or more, the listening subscription errors out:
{
  "error": "Could not connect to websocket endpoint ws://website.test:8000/graphql. Please check if the endpoint url is correct."
}
Here is my ApolloServer setup:
const apollo = new ApolloServer({
  introspection: true,
  playground: true,
  typeDefs: schema,
  subscriptions: {
    keepAlive: 40000,
  },
  resolvers,
  context: ........
});
In my local environment, when I do not set keepAlive, the connection stays open indefinitely. If I set it to 10000, it works great. With keepAlive set to 40000, I get the error and the connection closes.
UPDATE
One thing we just noticed is that this issue happens in the playground but not in our web app. Maybe it's just a playground thing?
If you check the documentation provided by Apollo GraphQL:
keepAlive - Number
The frequency with which the subscription endpoint should send keep-alive messages to open client connections, in milliseconds, if any.
If this value isn't provided, the subscription endpoint doesn't send keep-alive messages.
As per your configuration, keepAlive set to 40000 is equivalent to 40 seconds, so the subscription endpoint only sends keep-alive messages every 40 seconds. That is probably too long, and the connection has already been closed by then. You should use a smaller value.
If none of these helps, I suggest you open an issue on the Apollo repository.
I had the same issue just a few days ago. I resolved it by also adding the path option, and by declaring the path inside the Express middleware as well - https://www.reddit.com/r/graphql/comments/fpb3yt
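For reference, a hedged sketch of what that path fix might look like, assuming apollo-server-express 2.x; the endpoint path and port are illustrative, and schema/resolvers are the ones from the question:
const express = require('express');
const http = require('http');
const { ApolloServer } = require('apollo-server-express');

const app = express();
const apollo = new ApolloServer({
  typeDefs: schema,   // schema and resolvers as defined in the question
  resolvers,
  subscriptions: {
    path: '/graphql', // declare the subscription path explicitly
    keepAlive: 40000,
  },
});

apollo.applyMiddleware({ app, path: '/graphql' });

const httpServer = http.createServer(app);
apollo.installSubscriptionHandlers(httpServer); // attach the WS endpoint
httpServer.listen(8000);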
WebSocket KeepAlive: WebSockets ping/pong, why not TCP keepalive?
How to handle WebSocket connection loss: Handling connection loss with websockets
Similar: Apollo Server - GraphQL's subscription websocket connection doesn't persist

Server-sent events behind varnish-cache, not sending messages or messages never pushed

I have backends that use Redis pub/sub to publish messages to subscribers. This works very well with NGINX alone, but when I place Varnish in front of my NGINX, messages are never pushed to the browsers, although they are being published by the Go servers.
My Varnish config is the default installed by apt-get, using the stock VCL. I updated the default config to point to my NGINX:
backend default {
    .host = "NGINX_url";
    .port = "80";
}
Other than this, I left everything commented out.
Sorry if I have asked this twice, on the forums and here. I think Varnish is great, awesome software and I'm eager to implement it in our production apps.
Thank you in advance.
When pushing messages from the server to the browsers, I suppose you are using a WebSocket. To use WebSockets with Varnish you have to set up the following in your VCL:
sub vcl_pipe {
    # forward the Upgrade header to the backend during the handshake
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
}

sub vcl_recv {
    # bypass the cache entirely for WebSocket requests
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }
}
https://www.varnish-cache.org/docs/3.0/tutorial/websockets.html

How to kill a connection in nodejs

I have a homework assignment to build an http server using only node native modules.
I am trying to protect the server from overloading, so each request is hashed and stored.
If a certain request reaches a high count, say 500, I call socket.destroy().
Every interval (one minute) I reset the hash table. The problem is that when I do, a socket that was previously dead works again. The only thing I do on each interval is requests = {}, nothing that touches the connections.
Any ideas why the connection comes back to life? Is there a better function to use than destroy()?
Thanks
Destroying the socket won't necessarily stop the client from retrying the request with a new socket.
You might instead try responding minimally with just a non-OK status code:
if (requests[path] >= 500) {
  res.statusCode = 503;
  res.end();
}
And, on the 503 status code (as the HTTP spec describes it):
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server.
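Putting the question's design together with this suggestion, here is a minimal sketch using only native modules (the threshold and reset interval are from the question; the port is arbitrary):
const http = require('http');

let requests = {};
setInterval(() => { requests = {}; }, 60 * 1000); // reset the hash table

http.createServer((req, res) => {
  const path = req.url;
  requests[path] = (requests[path] || 0) + 1;

  if (requests[path] >= 500) {
    // Refuse with a real response instead of destroying the socket,
    // so the client is told to back off rather than silently retrying.
    res.statusCode = 503;
    res.end();
    return;
  }

  res.end('ok\n');
}).listen(8080);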
