nginx / sails.js: incomplete file upload - node.js

We are developing an app using sails.js.
In this app we have an upload controller:
https://github.com/makinacorpus/mnhn_bai/blob/master/api/controllers/Object3DController.js
This controller uses skipper under the hood, as explained in the documentation.
Now the problem is that when we upload big files, they are stored incompletely: the uploaded size is never the same and varies from 7 MB to 14 MB for a 15 MB file.
The architecture is as follows:
haproxy -> nginx -> node.js/sails.
If we replace the nginx reverse proxy with a simple Apache + ProxyPass configuration, the uploads work flawlessly.
If we replace the node.js app with a simple Python upload controller (in Flask, for example), the upload also has the correct length and data.
Of course nginx has been correctly configured for the buffer sizes, client_body_timeout, and client_max_body_size, and as I said, the Flask app receives the upload correctly.
The upload through nginx results in a 200 response, so it seems the file was uploaded, but on disk the file is in fact incomplete.
In the nginx debug log we can see the following:
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header:
"POST /admin/edit_object/6 HTTP/1.1^M
Host: xxxxxx.makina-corpus.net^M
X-Real-IP: xxxx^M
X-Forwarded-For: xxxxx^M
X-NginX-Proxy: true^M
X-Forwarded-Proto: http^M
Connection: upgrade^M
Content-Length: 15361775^M
Origin: http://bai.makina-corpus.net^M
User-Agent: Mozilla/5.0 (Unknown; Linux x86_64) AppleWebKit/534.34 (KHTML, like Gecko) CasperJS/1.1.0-beta3+PhantomJS/1.9.8 Safari/534.34^M
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryRt4v4f7RkrlzUEX2^M
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8^M
Referer: http://xxxxxxxxxx.makina-corpus.net/admin/edit_object/6^M
Cookie: sails.sid=s%3Akv_Gxxxxxxxx2F5iaDWA^M
Accept-Encoding: gzip^M
Accept-Language: en,*^M
Authorization: Basic xxxx=^M
^M
"
2014/12/03 01:57:23 [debug] 39583#0: *1 http cleanup add: 00000000011CC520
2014/12/03 01:57:23 [debug] 39583#0: *1 init keepalive peer
2014/12/03 01:57:23 [debug] 39583#0: *1 get keepalive peer
2014/12/03 01:57:23 [debug] 39583#0: *1 get rr peer, try: 1
2014/12/03 01:57:23 [debug] 39583#0: *1 get keepalive peer: using connection 0000000001156018
2014/12/03 01:57:23 [debug] 39583#0: *1 http upstream connect: -4
2014/12/03 01:57:23 [debug] 39583#0: *1 http upstream send request
2014/12/03 01:57:23 [debug] 39583#0: *1 chain writer buf fl:0 s:806
2014/12/03 01:57:23 [debug] 39583#0: *1 chain writer buf fl:1 s:15361775
2014/12/03 01:57:23 [debug] 39583#0: *1 chain writer in: 00000000011CC5C0
2014/12/03 01:57:23 [debug] 39583#0: *1 tcp_nopush
2014/12/03 01:57:23 [debug] 39583#0: *1 writev: 806
2014/12/03 01:57:23 [debug] 39583#0: *1 sendfile: #0 15361775
2014/12/03 01:57:23 [debug] 39583#0: *1 sendfile: 2776864, #0 2776864:15361775
2014/12/03 01:57:23 [debug] 39583#0: *1 chain writer out: 00000000011CC5D0
2014/12/03 01:57:23 [debug] 39583#0: *1 event timer add: 35: 60000:1417568303245
2014/12/03 01:57:23 [debug] 39583#0: *1 http run request: "/admin/edit_object/6?"
2014/12/03 01:57:23 [debug] 39583#0: *1 http request empty handler
2014/12/03 01:57:23 [debug] 39583#0: *1 http upstream request: "/admin/edit_object/6?"
2014/12/03 01:57:23 [debug] 39583#0: *1 http upstream send request handler
2014/12/03 01:57:23 [debug] 39583#0: *1 http upstream send request
2014/12/03 01:57:23 [debug] 39583#0: *1 chain writer in: 00000000011CC5D0
2014/12/03 01:57:23 [debug] 39583#0: *1 sendfile: #2776864 12584911
2014/12/03 01:57:23 [debug] 39583#0: *1 sendfile: 2488810, #2776864 2488810:12584911
2014/12/03 01:57:23 [debug] 39583#0: *1 chain writer out: 00000000011CC5D0
2014/12/03 01:57:23 [debug] 39583#0: *1 event timer del: 35: 1417568303245
2014/12/03 01:57:23 [debug] 39583#0: *1 event timer add: 35: 60000:1417568303254
2014/12/03 01:57:23 [debug] 39583#0: *1 http upstream request: "/admin/edit_object/6?"
2014/12/03 01:57:23 [debug] 39583#0: *1 http upstream process header
2014/12/03 01:57:23 [debug] 39583#0: *1 malloc: 00000000011CD000:262144
2014/12/03 01:57:23 [debug] 39583#0: *1 recv: fd:35 369 of 262144
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy status 200 "200 OK"
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "X-Powered-By: Sails <sailsjs.org>"
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "Access-Control-Allow-Origin: "
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "Access-Control-Allow-Credentials: "
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "Access-Control-Allow-Methods: "
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "Access-Control-Allow-Headers: "
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "Content-Type: application/json; charset=utf-8"
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "Content-Length: 33"
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "Vary: Accept-Encoding"
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "Date: Wed, 03 Dec 2014 00:57:23 GMT"
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "Connection: keep-alive"
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header done
2014/12/03 01:57:23 [debug] 39583#0: *1 uploadprogress error-tracker error: 0
2014/12/03 01:57:23 [debug] 39583#0: *1 xslt filter header
2014/12/03 01:57:23 [debug] 39583#0: *1 HTTP/1.1 200 OK^M
Server: nginx^M
Date: Wed, 03 Dec 2014 00:57:23 GMT^M
The problem seems to be that skipper never hits the 'finish' event at the upstream level. An infinite loop?
The sails stdout:
Parser: Done reading textparam through field `gallery`
Parser: Done reading textparam through field `category`
Parser: Done reading textparam through field `copyright`
Parser: Done reading textparam through field `published`
Parser: Done reading textparam through field `filename_3D`
Parser: Done reading textparam through field `filename_flat`
Parser: Done reading textparam through field `preview`
Parser: Done reading textparam through field `preview_animated`
Something is trying to read from Upstream `media_files`...
Passing control to app...
User allowed : admin ( 1 )
RenamerPump:
• dirname => undefined
• field => media_files
• fd => 04cb80ba-dce6-4a1d-9b54-ac8b08ca3e06
100
100
100
100
100
100
100
100
100
100
100
The interesting thing is that, after the upload, the file on disk ends with the headers of another, unrelated request:
07bb890: 3130 3130 3130 3130 3130 3130 3130 3130 1010101010101010
07bb8a0: 3130 3130 3130 3130 3130 3130 3130 3130 1010101010101010
07bb8b0: 3130 3130 3130 3130 3130 3130 3130 3130 1010101010101010
07bb8c0: 3130 3130 3130 3130 3130 3130 4745 5420 101010101010GET
07bb8d0: 2f20 4854 5450 2f31 2e31 0d0a 486f 7374 / HTTP/1.1..Host
07bb8e0: xxxx xxxx xxxx xxxx xxxx xxxx 2d63 6f72 : xxx.makina-cor
07bb8f0: 7075 732e 6e65 740d 0a58 2d52 6561 6c2d pus.net..X-Real-
07bb900: 4950
07bb910: 3134 0d0a 582d 466f 7277 6172 6465 642d 14..X-Forwarded-
07bb920: 466f For: xxxxxxxxxxx
07bb930: 2e31
07bb940: 2e31 340d 0a58 2d4e 6769 6e58 2d50 726f .14..X-NginX-Pro
07bb950: 7879 3a20 7472 7565 0d0a 582d 466f 7277 xy: true..X-Forw
07bb960: 6172 6465 642d 5072 6f74 6f3a 2068 7474 arded-Proto: htt
07bb970: 700d 0a43 6f6e 6e65 6374 696f 6e3a 2075 p..Connection: u
07bb980: 7067 7261 6465 0d0a 5573 6572 2d41 6765 pgrade..User-Age
07bb990: 6e74 3a20 4d6f 7a69 6c6c 612f 352e 3020 nt: Mozilla/5.0
07bb9a0: 2855 6e6b 6e6f 776e 3b20 4c69 6e75 7820 (Unknown; Linux
07bb9b0: 7838 365f 3634 2920 4170 706c 6557 6562 x86_64) AppleWeb
07bb9c0: 4b69 742f 3533 342e 3334 2028 4b48 544d Kit/534.34 (KHTM
07bb9d0: 4c2c 206c 696b 6520 4765 636b 6f29 2043 L, like Gecko) C
07bb9e0: 6173 7065 724a 532f 312e 312e 302d 6265 asperJS/1.1.0-be
07bb9f0: 7461 332b 5068 616e 746f 6d4a 532f 312e ta3+PhantomJS/1.
07bba00: 392e 3820 5361 6661 7269 2f35 3334 2e33 9.8 Safari/534.3
07bba10: 340d 0a41 6363 6570 743a 2074 6578 742f 4..Accept: text/
07bba20: 6874 6d6c 2c61 7070 6c69 6361 7469 6f6e html,application
07bba30: 2f78 6874 6d6c 2b78 6d6c 2c61 7070 6c69 /xhtml+xml,appli
07bba40: 6361 7469 6f6e 2f78 6d6c 3b71 3d30 2e39 cation/xml;q=0.9
07bba50: 2c2a 2f2a 3b71 3d30 2e38 0d0a 4163 6365 ,*/*;q=0.8..Acce
07bba60: 7074 2d45 6e63 6f64 696e 673a 2067 7a69 pt-Encoding: gzi
07bba70: 700d 0a41 6363 6570 742d 4c61 6e67 7561 p..Accept-Langua
07bba80: 6765 3a20 656e 2c2a 0d0a 4175 7468 6f72 ge: en,*..Author
07bba90: 697a 6174 696f 6e3a 2042 6173 6963 2063 ization: Basic c
07bbaa0: 6d39 xxxx xxxx xxxx xxxx 3d0d 0a xxxx=..
In other attempts we do not get another request's headers at the end, just an incomplete file.
Here the missing bytes are from the end of the original file; the start is always correct.
Note that the main difference from Apache is that nginx sends the data to the sails app in quick, big bursts, whereas Apache streams the request.
This is because nginx does request buffering.
If someone has an idea of where to continue digging in skipper to track down this upload problem!
If I replace the save method with the example below, I see that the bytes coming from nginx are received correctly and I have the full, correct file in the POSTed data, so the error is clearly somewhere in skipper's request consumption:
var body = "";
req.on('data', function (chunk) {
body += chunk;
});
req.on('end', function () {
console.log('POSTed: ' + body.length);
console.log('POSTed: ' + body.slice(-400));
res.writeHead(200);
res.end('<html/>');
});
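As an aside, body += chunk coerces the binary multipart data into a string, so the reported length can drift from the real byte count. A minimal variation of the same test that keeps the raw bytes (plain Node.js, nothing Sails-specific assumed) would be:
var chunks = [];
req.on('data', function (chunk) {
  // keep the raw Buffer chunks instead of stringifying them
  chunks.push(chunk);
});
req.on('end', function () {
  var body = Buffer.concat(chunks);
  // the byte count can now be compared directly against the Content-Length header
  console.log('POSTed bytes: ' + body.length);
  res.writeHead(200);
  res.end('<html/>');
});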

From the nginx debug log it seems that the problem is due to an early response from the backend. Note that in the last sendfile() call nginx was able to send only 2488810 of the 12584911 bytes it tried to:
...
2014/12/03 01:57:23 [debug] 39583#0: *1 chain writer in: 00000000011CC5D0
2014/12/03 01:57:23 [debug] 39583#0: *1 sendfile: #2776864 12584911
2014/12/03 01:57:23 [debug] 39583#0: *1 sendfile: 2488810, #2776864 2488810:12584911
2014/12/03 01:57:23 [debug] 39583#0: *1 chain writer out: 00000000011CC5D0
2014/12/03 01:57:23 [debug] 39583#0: *1 event timer del: 35: 1417568303245
2014/12/03 01:57:23 [debug] 39583#0: *1 event timer add: 35: 60000:1417568303254
2014/12/03 01:57:23 [debug] 39583#0: *1 http upstream request: "/admin/edit_object/6?"
2014/12/03 01:57:23 [debug] 39583#0: *1 http upstream process header
2014/12/03 01:57:23 [debug] 39583#0: *1 malloc: 00000000011CD000:262144
2014/12/03 01:57:23 [debug] 39583#0: *1 recv: fd:35 369 of 262144
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy status 200 "200 OK"
2014/12/03 01:57:23 [debug] 39583#0: *1 http proxy header: "X-Powered-By: Sails <sailsjs.org>"
...
And the backend returned a 200 OK answer. At this point nginx decides there is no reason to send the rest of the request body and stops sending it; this is what causes the incomplete uploaded files. Additionally, you have keepalive upstream connections configured, and you are hitting this bug, which is why you see the headers of an unrelated request.
Teaching the backend code to only send a response after the request is fully read, as in your test code, should resolve the problem.
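To make that concrete, here is a minimal sketch (plain Node.js, nothing skipper-specific) of a handler that only answers once the request stream has been consumed; req.resume() is there so that 'end' can fire even when nothing else reads the body:
function uploadHandler(req, res) {
  req.on('end', function () {
    // only answer once the whole request body has been read,
    // so nginx is never tempted to stop sending it early
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end('{"status":"ok"}');
  });
  // drain the stream if no other consumer is attached
  req.resume();
}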

So, the solution I found is to hack together a body parser that uses formidable.
No more problem :).
For the record, it was a bit of a hack to switch the body parser in the middleware:
config/http.js
module.exports.http = {
  middleware: {
    bodyParser: false,
    cbodyParser: require('../bodyParser')(
      {urls: [/\/admin\/edit_object/]}),
    order: [
      'startRequestTimer',
      'cookieParser',
      'session',
      'cbodyParser',
      'handleBodyParserError',
      'compress',
      'methodOverride',
      'poweredBy',
      '$custom',
      'router',
      'www',
      'favicon',
      '404',
      '500'
    ],
  }
};
bodyparser.js:
/**
 * Module dependencies
 */
var _ = require('lodash');
var util = require('util');
var formidable = require('formidable');

function mime(req) {
  var str = req.headers['content-type'] || '';
  return str.split(';')[0];
}

function parseMultipart(req, res, next) {
  req.form = new formidable.IncomingForm();
  req.form.uploadDir = sails.config.data.__uploadData;
  req.form.maxFieldsSize = sails.config.maxsize;
  req.form.multiple = true;
  // res.setTimeout(0);
  req.form.parse(req, function(err, fields, files) {
    if (err) {
      return next(err);
    } else {
      req.files = files;
      req.fields = fields;
      req.body = extend(fields, files);
      next();
    }
  });
}

function extend(target) {
  var key, obj;
  for (var i = 1, l = arguments.length; i < l; i++) {
    if ((obj = arguments[i])) {
      for (key in obj) {
        target[key] = obj[key];
      }
    }
  }
  return target;
}

function disable_parser(opts, req, res) {
  var matched = false;
  try {
    var method = null;
    try { method = req.method.toLowerCase(); }
    catch (err) { /* pass */ }
    if (method) {
      _(opts.urls).forEach(function(turl) {
        if (method === 'post' && req.url.match(turl)) {
          // console.log("matched" + req.url);
          if (!matched) matched = true;
        }
      });
    }
  } catch (err) { /* pass */ }
  return matched;
}

module.exports = function toParseHTTPBody(options) {
  options = options || {};
  var bodyparser = require('skipper')(options);
  // The NAME of the returned (non-anonymous) function IS IMPORTANT: it must match the middleware name in the config!
  return function cbodyParser(req, res, next) {
    var err_hdler = function(err) {};
    if (disable_parser(options, req, res) && mime(req) === 'multipart/form-data') {
      return parseMultipart(req, res, next);
    } else {
      return bodyparser(req, res, next);
    }
  };
};
Indeed, Sails lets you think that you can override the bodyParser, but you can't: the override would result in an anonymous function, and the Express router only maps "named" functions...
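For completeness, a hypothetical controller action for one of the matched /admin/edit_object URLs would then read the upload from req.files, which parseMultipart fills in above. The field name media_files comes from the logs earlier; everything else here is illustrative:
module.exports = {
  edit_object: function (req, res) {
    // formidable has already written the file into sails.config.data.__uploadData
    var upload = req.files && req.files.media_files;
    if (Array.isArray(upload)) upload = upload[0];
    if (!upload) {
      return res.badRequest('no file received');
    }
    sails.log.info('stored at ' + upload.path + ', size ' + upload.size);
    return res.json({size: upload.size});
  }
};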

We faced a similar issue. I don't know if our solution will work for you or not, but here goes.
For very large files, the CSRF token gets left out of the request body that reaches the app, so we need to send the CSRF token in a request header rather than in the request body. For that we changed XMLHttpRequest a little bit.
/*
 * Putting csrf in Header as some large
 * files need this mechanism to upload
 */
(function() {
  var send = XMLHttpRequest.prototype.send,
      token = csrfToken; // csrfToken is global
  XMLHttpRequest.prototype.send = function(data) {
    this.setRequestHeader('X-CSRF-Token', token);
    return send.apply(this, arguments);
  };
}());
From now on, every request will have the CSRF token in the header. This solved the problem for us. Hope this helps you too.
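If patching XMLHttpRequest.prototype.send globally feels too invasive, the same technique can be applied per request; a sketch, assuming csrfToken and a FormData object built from the upload form are in scope:
var xhr = new XMLHttpRequest();
xhr.open('POST', '/admin/edit_object/6');
// the token travels in a header, so it no longer depends on where it lands in the multipart body
xhr.setRequestHeader('X-CSRF-Token', csrfToken);
xhr.send(formData);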

Related

Terraform plan with 1Password provider fails with rpc error unavailable desc transport is closing

After adding some new secrets to Terraform using the 1Password provider, we saw an error without much helpful output.
$ terraform plan
...
Error: rpc error: code = Unavailable desc = transport is closing
Error: rpc error: code = Canceled desc = context canceled
...
Terraform provider:
terraform {
  required_providers {
    onepassword = {
      source  = "anasinnyk/onepassword"
      version = "~> 1.2.1"
    }
  }
  required_version = "~> 0.13"
}
Terraform config:
data "onepassword_item_password" "search_cloud_id" {
name = "Azure Elastic Cloud ID"
vault = data.onepassword_vault.vault_name.id
}
data "onepassword_item_password" "search_api_key" {
name = "Azure Elastic Cloud API key"
vault = data.onepassword_vault.vault_name.id
}
resource "kubernetes_secret" "search" {
metadata {
name = "search"
namespace = kubernetes_namespace.production.id
}
data = {
"ELASTICSEARCH_CLOUD_ID" = data.onepassword_item_password.api_search_cloud_id.password
"ELASTICSEARCH_API_KEY" = data.onepassword_item_password.api_search_api_key.password
}
type = "Opaque"
}
We managed to get some useful output by removing one data reference at a time, which led to these errors printing:
panic: runtime error: invalid memory address or nil pointer dereference
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x147d1bd]
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1:
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: goroutine 194 [running]:
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/anasinnyk/terraform-provider-1password/onepassword.resourceItemPasswordRead(0x19418a0, 0xc0004ac540, 0xc000096f80, 0x173d040, 0xc0007ac740, 0xc0003bce40, 0xc000119910, 0x100c9b8)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/anasinnyk/terraform-provider-1password/onepassword/resource_item_password.go:75 +0x18d
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0xc0004613f0, 0x1941820, 0xc000384300, 0xc000096f80, 0x173d040, 0xc0007ac740, 0x0, 0x0, 0x0)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2#v2.0.0/helper/schema/resource.go:288 +0x1ec
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0xc0004613f0, 0x1941820, 0xc000384300, 0xc000304b80, 0x173d040, 0xc0007ac740, 0xc0007ac740, 0xc000304b80, 0x0, 0x0)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2#v2.0.0/helper/schema/resource.go:489 +0xff
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/internal/helper/plugin.(*GRPCProviderServer).ReadDataSource(0xc00026e6a0, 0x1941820, 0xc000384300, 0xc0003842c0, 0xc00026e6a0, 0xc00026e6b0, 0x185a058)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2#v2.0.0/internal/helper/plugin/grpc_provider.go:1102 +0x4c5
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfplugin5._Provider_ReadDataSource_Handler.func1(0x1941820, 0xc000384300, 0x17dcd60, 0xc0003842c0, 0xc000384300, 0x1773c80, 0xc0004ac401, 0xc000304640)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2#v2.0.0/internal/tfplugin5/tfplugin5.pb.go:3348 +0x86
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/plugin.Serve.func3.1(0x19418e0, 0xc0003d4480, 0x17dcd60, 0xc0003842c0, 0xc000304620, 0xc000304640, 0xc0007c8ba0, 0x11b81c8, 0x17c7a20, 0xc0003d4480)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2#v2.0.0/plugin/serve.go:76 +0x87
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfplugin5._Provider_ReadDataSource_Handler(0x17fdb60, 0xc00026e6a0, 0x19418e0, 0xc0003d4480, 0xc0004ac4e0, 0xc00000d080, 0x19418e0, 0xc0003d4480, 0xc000010090, 0x90)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2#v2.0.0/internal/tfplugin5/tfplugin5.pb.go:3350 +0x14b
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc.(*Server).processUnaryRPC(0xc00027ae00, 0x1949c60, 0xc000103380, 0xc00018e000, 0xc00020acf0, 0x1e49910, 0x0, 0x0, 0x0)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc#v1.30.0/server.go:1171 +0x50a
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc.(*Server).handleStream(0xc00027ae00, 0x1949c60, 0xc000103380, 0xc00018e000, 0x0)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc#v1.30.0/server.go:1494 +0xccd
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc0000382e0, 0xc00027ae00, 0x1949c60, 0xc000103380, 0xc00018e000)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc#v1.30.0/server.go:834 +0xa1
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: created by google.golang.org/grpc.(*Server).serveStreams.func1
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc#v1.30.0/server.go:832 +0x204
2021-08-27T15:34:29.368+0930 [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021/08/27 15:34:29 [ERROR] eval: *terraform.evalReadDataRefresh, err: rpc error: code = Unavailable desc = transport is closing
2021/08/27 15:34:29 [ERROR] eval: *terraform.evalReadDataRefresh, err: rpc error: code = Unavailable desc = transport is closing
2021/08/27 15:34:29 [ERROR] eval: *terraform.evalReadDataRefresh, err: rpc error: code = Unavailable desc = transport is closing
2021/08/27 15:34:29 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-08-27T15:34:29.369+0930 [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/anasinnyk/onepassword/1.2.1/darwin_amd64/terraform-provider-onepassword_v1.2.1 pid=17549 error="exit status 2"
2021/08/27 15:34:29 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021/08/27 15:34:29 [TRACE] [walkRefresh] Exiting eval tree: data.onepassword_item_password.search_api_key
2021/08/27 15:34:29 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021/08/27 15:34:29 [TRACE] vertex "data.onepassword_item_password.search_api_key": visit complete
2021/08/27 15:34:29 [TRACE] vertex "data.onepassword_item_password.search_api_key": dynamic subgraph encountered errors
2021/08/27 15:34:29 [TRACE] vertex "data.onepassword_item_password.search_api_key": visit complete
2021/08/27 15:34:29 [TRACE] vertex "data.onepassword_item_password.search_api_key (expand)": dynamic subgraph encountered errors
2021/08/27 15:34:29 [TRACE] vertex "data.onepassword_item_password.search_api_key (expand)": visit complete
2021/08/27 15:34:29 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/hashicorp/kubernetes\"] (close)" errored, so skipping
2021/08/27 15:34:29 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/anasinnyk/onepassword\"] (close)" errored, so skipping
2021/08/27 15:34:29 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021-08-27T15:34:29.501+0930 [DEBUG] plugin: plugin exited
2021-08-27T15:34:29.502+0930 [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-08-27T15:34:29.507+0930 [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/hashicorp/kubernetes/1.13.3/darwin_amd64/terraform-provider-kubernetes_v1.13.3_x4 pid=17673
2021-08-27T15:34:29.507+0930 [DEBUG] plugin: plugin exited
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.
When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.
SECURITY WARNING: the "crash.log" file that was created may contain
sensitive information that must be redacted before it is safe to share
on the issue tracker.
[1]: https://github.com/hashicorp/terraform/issues
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
This led us to find that one of our team members managed to create two 1Password entries with the same name in the same vault.
After deleting the duplicate entry in 1Password, terraform plan ran without error again.

nginx task completion handler cannot respond after ngx_thread_task_post in a body filter

I'm developing an nginx (1.19.0) body filter module with multi-threading enabled (--with-threads, which enables NGINX to use thread pools; for details, see "Thread Pools in NGINX Boost Performance 9x!" on the NGINX blog). The module aims to save the access_token from the upstream server's response.
I referred to the development guide (Threads), "How to make Nginx wait for a thread pool task", and "nginx HTTP module with Thread Pools and Tasks".
My code snippet is as follows:
typedef struct {
    int                 status;
    cJSON              *oauth2_rsp;
    ngx_http_request_t *req;
    ngx_chain_t        *chain;
} redis_thread_ctx_t;

/* This function is executed in a separate thread */
static void redis_thread_func(void *data, ngx_log_t *log) {
    ngx_logd("SAM_DEBUG: redis_thread_func");
    redis_thread_ctx_t *ctx = data;
    cJSON *oauth2_access_token = cJSON_GetObjectItemCaseSensitive(ctx->oauth2_rsp, OAUTH2_PARAM_NAME_ACCESS_TOKEN);
    cJSON *oauth2_token_type = cJSON_GetObjectItemCaseSensitive(ctx->oauth2_rsp, OAUTH2_PARAM_NAME_TOKEN_TYPE);
    cJSON *oauth2_expires_in = cJSON_GetObjectItemCaseSensitive(ctx->oauth2_rsp, OAUTH2_PARAM_NAME_EXPIRES_IN);
    if (0 == cache_token(ctx->req, oauth2_access_token->valuestring,
                         cJSON_IsString(oauth2_token_type) ? oauth2_token_type->valuestring : "Bear",
                         cJSON_IsNumber(oauth2_expires_in) ? oauth2_expires_in->valueint : 3600)) {
        ctx->status = NGX_HTTP_OK;
    } else {
        ngx_log_error(NGX_LOG_ERR, log, 0, "cache_token failed");
    }
    ngx_logd("SAM_DEBUG: after cache_token");
    cJSON_free(ctx->oauth2_rsp);
    ngx_logd("SAM_DEBUG: after cJSON_free");
}

/*
 * The task completion handler executes on the main event loop, and is pretty straightforward: Mark the background
 * processing complete, and call the nginx HTTP function to resume processing of the request.
 */
static void redis_thread_completion(ngx_event_t *ev) {
    redis_thread_ctx_t *ctx = ev->data;
    ngx_http_request_t *req = ctx->req;
    ngx_connection_t *con = req->connection;
    ngx_log_t *log = con->log;
    ngx_http_set_log_request(log, req);
    ngx_logd("SAM_DEBUG: redis_thread_completion: \"%V?%V\"", &req->uri, &req->args);
    req->main->blocked--;
    req->aio = 0;
    //ngx_http_handler(req);
    ngx_http_next_body_filter(req, ctx->chain);
    //ngx_http_finalize_request(req, NGX_DONE);
    ngx_logd("SAM_DEBUG: after ngx_http_next_body_filter");
}

//https://serverfault.com/questions/480352/modify-data-being-proxied-by-nginx-on-the-fly
static ngx_int_t ngx_http_pep_body_filter(ngx_http_request_t *req, ngx_chain_t *chain) {
    // ... omitted for brevity
    cJSON *oauth2_rsp_json = NULL;
//#if (NGX_THREADS)
    ngx_thread_task_t *task = ngx_thread_task_alloc(req->pool, sizeof(redis_thread_ctx_t));
    if (NULL == task) {
        return NGX_ERROR;
    }
    ngx_logd("SAM_DEBUG: after ngx_thread_task_alloc");
    redis_thread_ctx_t *redis_ctx = task->ctx;
    redis_ctx->status = NGX_HTTP_BAD_GATEWAY;
    redis_ctx->req = req;
    redis_ctx->oauth2_rsp = oauth2_rsp_json;
    redis_ctx->chain = chain;
    task->handler = redis_thread_func;
    task->event.handler = redis_thread_completion;
    task->event.data = redis_ctx;
    ngx_http_core_loc_conf_t *clcf = ngx_http_get_module_loc_conf(req, ngx_http_core_module);
    //subrequests=51, count=1, blocked=1, aio=0
    ngx_logd("SAM_DEBUG: subrequests=%d, count=%d, blocked=%d, aio=%d", req->subrequests, req->count, req->blocked, req->aio);
    if (NGX_OK != ngx_thread_task_post(clcf->thread_pool, task)) {
        req->main->blocked--;
        cJSON_free(oauth2_rsp_json);
        ngx_log_error(NGX_LOG_ERR, log, 0, "ngx_thread_task_post failed");
        return NGX_ERROR; //NGX_HTTP_INTERNAL_SERVER_ERROR
    }
    //Note: increment `req->main->blocked` so nginx won't finalize request (req)
    req->main->blocked++;
    req->aio = 1;
    ngx_logd("SAM_DEBUG: after ngx_thread_task_post");
//#else
#if defined(USE_REDIS_TO_CACHE_TOKEN) && (NGX_THREADS)
    return NGX_OK; //NGX_AGAIN
#else
    return ngx_http_next_body_filter ? ngx_http_next_body_filter(req, chain) : NGX_OK;
#endif
}
After a test, unfortunately, I found that the client could not receive the response. Postman showed me "Error: socket hang up", and there was no corresponding HTTP response packet in Wireshark. In addition, error.log was as follows:
2020/07/20 18:38:55 [debug] 461#461: *3 redis_thread_completion|772|SAM_DEBUG: after ngx_http_next_body_filter
2020/07/20 18:38:55 [debug] 461#461: timer delta: 3
2020/07/20 18:38:55 [debug] 461#461: worker cycle
2020/07/20 18:38:55 [debug] 461#461: epoll timer: 59997
2020/07/20 18:39:55 [debug] 461#461: timer delta: 59998
2020/07/20 18:39:55 [debug] 461#461: *3 event timer del: 3: 18416868
2020/07/20 18:39:55 [debug] 461#461: *3 http empty handler
2020/07/20 18:39:55 [debug] 461#461: worker cycle
2020/07/20 18:39:55 [debug] 461#461: epoll timer: 5002
2020/07/20 18:40:00 [debug] 461#461: timer delta: 5003
2020/07/20 18:40:00 [debug] 461#461: *3 event timer del: 3: 18421871
2020/07/20 18:40:00 [debug] 461#461: *3 http keepalive handler
2020/07/20 18:40:00 [debug] 461#461: *3 close http connection: 3
2020/07/20 18:40:00 [debug] 461#461: *3 reusable connection: 0
2020/07/20 18:40:00 [debug] 461#461: *3 free: 0000000000000000
2020/07/20 18:40:00 [debug] 461#461: *3 free: 00007FFFEC420BF0, unused: 136
2020/07/20 18:40:00 [debug] 461#461: worker cycle
2020/07/20 18:40:00 [debug] 461#461: epoll timer: -1
Where have I gone wrong?
After reading the code of static void ngx_http_upstream_thread_event_handler(ngx_event_t *ev) in nginx/src/http/ngx_http_upstream.c, I changed ngx_http_next_body_filter(req, ctx->chain); to req->write_event_handler(req); in static void redis_thread_completion(ngx_event_t *ev), and:
static ngx_int_t ngx_http_pep_body_filter(ngx_http_request_t *req, ngx_chain_t *chain) {
    // ... omitted for brevity
    return ngx_http_next_body_filter(req, chain);
}
As a result, nginx could send the HTTP response to the client.

502 Bad Gateway when using Let's Encrypt with Nginx and Node.js on new domain

I have recently set up a new domain on my server, following the same steps taken for the previous two (which are working fine over HTTPS). When trying to connect to my new site, however, I see the following in error.log:
2018/10/02 00:54:33 [error] 8536#8536: *1 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 88.98.209.129, server: rees.app, request: "GET / HTTP/2.0", upstream: "https://139.59.178.110:3500/", host: "rees.app"
With debug logging enabled for this error, I get:
2018/10/02 00:57:14 [debug] 8687#8687: epoll add event: fd:6 op:1 ev:00002001
2018/10/02 00:57:14 [debug] 8687#8687: epoll add event: fd:7 op:1 ev:00002001
2018/10/02 00:57:14 [debug] 8687#8687: epoll add event: fd:8 op:1 ev:00002001
2018/10/02 00:57:16 [debug] 8687#8687: accept on 0.0.0.0:443, ready: 0
2018/10/02 00:57:16 [debug] 8687#8687: posix_memalign: 000055C3AF691260:512 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 accept: 88.98.209.129:48510 fd:3
2018/10/02 00:57:16 [debug] 8687#8687: *1 event timer add: 3: 60000:1538441896702
2018/10/02 00:57:16 [debug] 8687#8687: *1 reusable connection: 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 epoll add event: fd:3 op:1 ev:80002001
2018/10/02 00:57:16 [debug] 8687#8687: *1 http check ssl handshake
2018/10/02 00:57:16 [debug] 8687#8687: *1 http recv(): 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 https ssl handshake: 0x16
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL ALPN supported by client: h2
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL ALPN supported by client: http/1.1
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL ALPN selected: h2
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL server name: "rees.app"
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_do_handshake: -1
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_get_error: 2
2018/10/02 00:57:16 [debug] 8687#8687: *1 reusable connection: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL handshake handler: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_do_handshake: 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD"
2018/10/02 00:57:16 [debug] 8687#8687: *1 init http2 connection
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF69F6F0:512 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF63FFA0:4096 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 add cleanup: 000055C3AF6913E0
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF69F900:512 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 send SETTINGS frame ack:0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 send WINDOW_UPDATE frame sid:0, window:2147418112
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 read handler
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_read: -1
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_get_error: 2
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame out: 000055C3AF6400A8 sid:0 bl:0 len:4
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame out: 000055C3AF63FFF0 sid:0 bl:0 len:18
2018/10/02 00:57:16 [debug] 8687#8687: *1 malloc: 000055C3AF632C00:16384
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL buf copy: 27
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL buf copy: 13
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL to write: 40
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_write: 40
2018/10/02 00:57:16 [debug] 8687#8687: *1 tcp_nodelay
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame sent: 000055C3AF63FFF0 sid:0 bl:0 len:18
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame sent: 000055C3AF6400A8 sid:0 bl:0 len:4
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF63FFA0, unused: 3656
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF632C00
2018/10/02 00:57:16 [debug] 8687#8687: *1 reusable connection: 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 event timer del: 3: 1538441896702
2018/10/02 00:57:16 [debug] 8687#8687: *1 event timer add: 3: 180000:1538442016709
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 idle handler
2018/10/02 00:57:16 [debug] 8687#8687: *1 reusable connection: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF63FFA0:4096 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 read handler
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_read: 64
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_read: -1
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_get_error: 2
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 preface verified
2018/10/02 00:57:16 [debug] 8687#8687: *1 process http2 frame type:4 f:0 l:18 sid:0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 send SETTINGS frame ack:1
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame complete pos:000055C3AF6FC5C3 end:000055C3AF6FC5D0
2018/10/02 00:57:16 [debug] 8687#8687: *1 process http2 frame type:8 f:0 l:4 sid:0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 WINDOW_UPDATE frame sid:0 window:15663105
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame complete pos:000055C3AF6FC5D0 end:000055C3AF6FC5D0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame out: 000055C3AF63FFF0 sid:0 bl:0 len:0
2018/10/02 00:57:16 [debug] 8687#8687: *1 malloc: 000055C3AF632C00:16384
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL buf copy: 9
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL to write: 9
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_write: 9
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame sent: 000055C3AF63FFF0 sid:0 bl:0 len:0
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF63FFA0, unused: 3855
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF632C00
2018/10/02 00:57:16 [debug] 8687#8687: *1 reusable connection: 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 event timer: 3, old: 1538442016709, new: 1538442016709
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 idle handler
2018/10/02 00:57:16 [debug] 8687#8687: *1 reusable connection: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF63FFA0:4096 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 read handler
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_read: 284
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_read: -1
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_get_error: 2
2018/10/02 00:57:16 [debug] 8687#8687: *1 process http2 frame type:1 f:25 l:275 sid:1
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 HEADERS frame sid:1 on 0 excl:1 weight:256
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF69FB10:1024 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF6A0800:4096 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF6A1810:4096 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 get indexed header name: 2
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 get indexed header: 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack encoded string length: 6
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 add header to hpack table: ":authority: rees.app"
2018/10/02 00:57:16 [debug] 8687#8687: *1 malloc: 000055C3AF69FF20:512
2018/10/02 00:57:16 [debug] 8687#8687: *1 malloc: 000055C3AF632C00:4096
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack table account: 50 free:4096
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 get indexed header name: 7
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 get indexed header name: 4
2018/10/02 00:57:16 [debug] 8687#8687: *1 http uri: "/"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http args: ""
2018/10/02 00:57:16 [debug] 8687#8687: *1 http exten: ""
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 get indexed header: 24
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack encoded string length: 7
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 add header to hpack table: "cache-control: max-age=0"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack table account: 54 free:4046
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 http header: "cache-control: max-age=0"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack encoded string length: 18
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack raw string length: 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 add header to hpack table: "upgrade-insecure-requests: 1"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack table account: 58 free:3992
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 http header: "upgrade-insecure-requests: 1"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 get indexed header: 58
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack encoded string length: 92
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 add header to hpack table: "user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack table account: 163 free:3934
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 http header: "user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 get indexed header: 19
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack encoded string length: 64
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 add header to hpack table: "accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack table account: 123 free:3771
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 http header: "accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 get indexed header: 16
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack encoded string length: 13
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 add header to hpack table: "accept-encoding: gzip, deflate, br"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack table account: 64 free:3648
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF6A0130:512 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 http header: "accept-encoding: gzip, deflate, br"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 get indexed header: 17
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack encoded string length: 11
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 add header to hpack table: "accept-language: en,en-GB;q=0.9"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack table account: 61 free:3584
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 http header: "accept-language: en,en-GB;q=0.9"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 get indexed header: 32
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack encoded string length: 38
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 add header to hpack table: "cookie: __cfduid=dcb5ea66f953195d1aeec15d5d437be411538438395"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 hpack table account: 90 free:3523
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 http request line: "GET / HTTP/2.0"
2018/10/02 00:57:16 [debug] 8687#8687: *1 generic phase: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 rewrite phase: 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 test location: "/"
2018/10/02 00:57:16 [debug] 8687#8687: *1 using configuration "/"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http cl:-1 max:1048576
2018/10/02 00:57:16 [debug] 8687#8687: *1 rewrite phase: 3
2018/10/02 00:57:16 [debug] 8687#8687: *1 post rewrite phase: 4
2018/10/02 00:57:16 [debug] 8687#8687: *1 generic phase: 5
2018/10/02 00:57:16 [debug] 8687#8687: *1 generic phase: 6
2018/10/02 00:57:16 [debug] 8687#8687: *1 generic phase: 7
2018/10/02 00:57:16 [debug] 8687#8687: *1 access phase: 8
2018/10/02 00:57:16 [debug] 8687#8687: *1 access phase: 9
2018/10/02 00:57:16 [debug] 8687#8687: *1 access phase: 10
2018/10/02 00:57:16 [debug] 8687#8687: *1 post access phase: 11
2018/10/02 00:57:16 [debug] 8687#8687: *1 http body new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http init upstream, client timer: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "Cookie: "
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script var: "__cfduid=dcb5ea66f953195d1aeec15d5d437be411538438395"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "
"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "Host: "
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script var: "rees.app"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "
"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: ""
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: ""
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "X-Forwarded-For: "
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script var: "88.98.209.129"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "
"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "X-Forwarded-Host: "
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script var: "rees.app"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "
"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "X-Forwarded-Server: "
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script var: "rees.app"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "
"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "X-Real-IP: "
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script var: "88.98.209.129"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "
"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "X-Forwarded-Proto: "
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script var: "https"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "
"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "X-Original-Request: "
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script var: "/"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "
"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: "Connection: close
"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: ""
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: ""
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: ""
2018/10/02 00:57:16 [debug] 8687#8687: *1 http script copy: ""
2018/10/02 00:57:16 [debug] 8687#8687: *1 http proxy header: "cache-control: max-age=0"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http proxy header: "upgrade-insecure-requests: 1"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http proxy header: "user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http proxy header: "accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http proxy header: "accept-language: en,en-GB;q=0.9"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http proxy header:
"GET / HTTP/1.1
Cookie: __cfduid=dcb5ea66f953195d1aeec15d5d437be411538438395
Host: rees.app
X-Forwarded-For: 88.98.209.129
X-Forwarded-Host: rees.app
X-Forwarded-Server: rees.app
X-Real-IP: 88.98.209.129
X-Forwarded-Proto: https
X-Original-Request: /
Connection: close
cache-control: max-age=0
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
accept-language: en,en-GB;q=0.9
"
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF633C10:4096 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 http cleanup add: 000055C3AF6A27E8
2018/10/02 00:57:16 [debug] 8687#8687: *1 get rr peer, try: 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 stream socket 12
2018/10/02 00:57:16 [debug] 8687#8687: *1 epoll add connection: fd:12 ev:80002005
2018/10/02 00:57:16 [debug] 8687#8687: *1 connect to 139.59.178.110:3500, fd:12 #2
2018/10/02 00:57:16 [debug] 8687#8687: *1 http upstream connect: -2
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF687DF0:128 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 event timer add: 12: 59000:1538441895710
2018/10/02 00:57:16 [debug] 8687#8687: *1 http finalize request: -4, "/?" a:1, c:2
2018/10/02 00:57:16 [debug] 8687#8687: *1 http request count:2 blk:0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame complete pos:000055C3AF6FC6AC end:000055C3AF6FC6AC
2018/10/02 00:57:16 [debug] 8687#8687: *1 event timer del: 3: 1538442016709
2018/10/02 00:57:16 [debug] 8687#8687: *1 http upstream request: "/?"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http upstream send request handler
2018/10/02 00:57:16 [debug] 8687#8687: *1 malloc: 000055C3AF69ACC0:72
2018/10/02 00:57:16 [debug] 8687#8687: *1 upstream SSL server name: "rees.app"
2018/10/02 00:57:16 [debug] 8687#8687: *1 set session: 0000000000000000
2018/10/02 00:57:16 [debug] 8687#8687: *1 tcp_nodelay
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_do_handshake: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_get_error: 5
2018/10/02 00:57:16 [error] 8687#8687: *1 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 88.98.209.129, server: rees.app, request: "GET / HTTP/2.0", upstream: "https://139.59.178.110:3500/", host: "rees.app"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http next upstream, 2
2018/10/02 00:57:16 [debug] 8687#8687: *1 free rr peer 1 4
2018/10/02 00:57:16 [debug] 8687#8687: *1 finalize http upstream request: 502
2018/10/02 00:57:16 [debug] 8687#8687: *1 finalize http proxy request
2018/10/02 00:57:16 [debug] 8687#8687: *1 close http upstream connection: 12
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF69ACC0
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF687DF0, unused: 32
2018/10/02 00:57:16 [debug] 8687#8687: *1 event timer del: 12: 1538441895710
2018/10/02 00:57:16 [debug] 8687#8687: *1 reusable connection: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http finalize request: 502, "/?" a:1, c:1
2018/10/02 00:57:16 [debug] 8687#8687: *1 http special response: 502, "/?"
2018/10/02 00:57:16 [debug] 8687#8687: *1 xslt filter header
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 header filter
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 output header: ":status: 502"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 output header: "server: nginx/1.10.3 (Ubuntu)"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 output header: "date: Tue, 02 Oct 2018 00:57:16 GMT"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 output header: "content-type: text/html"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 output header: "content-length: 584"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2:1 create HEADERS frame 000055C3AF633D50: len:62
2018/10/02 00:57:16 [debug] 8687#8687: *1 http cleanup add: 000055C3AF633E58
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame out: 000055C3AF633D50 sid:1 bl:1 len:62
2018/10/02 00:57:16 [debug] 8687#8687: *1 malloc: 000055C3AF695BD0:16384
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL buf copy: 9
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL buf copy: 62
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2:1 HEADERS frame 000055C3AF633D50 was sent
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame sent: 000055C3AF633D50 sid:1 bl:1 len:62
2018/10/02 00:57:16 [debug] 8687#8687: *1 http output filter "/?"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http copy filter: "/?"
2018/10/02 00:57:16 [debug] 8687#8687: *1 image filter
2018/10/02 00:57:16 [debug] 8687#8687: *1 xslt filter body
2018/10/02 00:57:16 [debug] 8687#8687: *1 http postpone filter "/?" 000055C3AF633FD8
2018/10/02 00:57:16 [debug] 8687#8687: *1 write new buf t:0 f:0 0000000000000000, pos 000055C3AE362AC0, size: 120 file: 0, size: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 write new buf t:0 f:0 0000000000000000, pos 000055C3AE363E40, size: 62 file: 0, size: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 write new buf t:0 f:0 0000000000000000, pos 000055C3AE363C60, size: 402 file: 0, size: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http write filter: l:1 f:0 s:584
2018/10/02 00:57:16 [debug] 8687#8687: *1 http write filter limit 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2:1 create DATA frame 000055C3AF633D50: len:584 flags:1
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame out: 000055C3AF633D50 sid:1 bl:0 len:584
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL buf copy: 9
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL buf copy: 120
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL buf copy: 62
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL buf copy: 402
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL to write: 664
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_write: 664
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2:1 DATA frame 000055C3AF633D50 was sent
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame sent: 000055C3AF633D50 sid:1 bl:0 len:584
2018/10/02 00:57:16 [debug] 8687#8687: *1 http write filter 0000000000000000
2018/10/02 00:57:16 [debug] 8687#8687: *1 http copy filter: 0 "/?"
2018/10/02 00:57:16 [debug] 8687#8687: *1 http finalize request: 0, "/?" a:1, c:1
2018/10/02 00:57:16 [debug] 8687#8687: *1 http request count:1 blk:0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 close stream 1, queued 0, processing 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 http close request
2018/10/02 00:57:16 [debug] 8687#8687: *1 http log handler
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF6A0800, unused: 8
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF6A1810, unused: 7
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF633C10, unused: 2750
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF69FB10, unused: 540
2018/10/02 00:57:16 [debug] 8687#8687: *1 post event 000055C3AF6D86F0
2018/10/02 00:57:16 [debug] 8687#8687: *1 delete posted event 000055C3AF6D86F0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 handle connection handler
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF63FFA0, unused: 3488
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF695BD0
2018/10/02 00:57:16 [debug] 8687#8687: *1 reusable connection: 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 event timer add: 3: 180000:1538442016711
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 idle handler
2018/10/02 00:57:16 [debug] 8687#8687: *1 reusable connection: 0
2018/10/02 00:57:16 [debug] 8687#8687: *1 posix_memalign: 000055C3AF63FFA0:4096 #16
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 read handler
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_read: 9
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_read: -1
2018/10/02 00:57:16 [debug] 8687#8687: *1 SSL_get_error: 2
2018/10/02 00:57:16 [debug] 8687#8687: *1 process http2 frame type:4 f:1 l:0 sid:0
2018/10/02 00:57:16 [debug] 8687#8687: *1 http2 frame complete pos:000055C3AF6FC599 end:000055C3AF6FC599
2018/10/02 00:57:16 [debug] 8687#8687: *1 free: 000055C3AF63FFA0, unused: 4016
2018/10/02 00:57:16 [debug] 8687#8687: *1 reusable connection: 1
2018/10/02 00:57:16 [debug] 8687#8687: *1 event timer: 3, old: 1538442016711, new: 1538442016713
I am not very familiar with Nginx or sysadmin work, so this log does not make much sense to me. I have tried deleting and re-creating the certificate files for the website, though this has not worked either. There are two other websites on the host which work perfectly fine, so I'm just unsure what is going on here.
This is the server block file:
# Remove WWW from HTTP
server {
    listen 80;
    server_name www.rees.app rees.app;
    return 301 https://rees.app$request_uri;
}
# Remove WWW from HTTPS
server {
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/rees.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rees.app/privkey.pem;
    server_name www.rees.app;
    return 301 https://rees.app$request_uri;
}
# HTTPS request
server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name rees.app;
    ssl_certificate /etc/letsencrypt/live/rees.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rees.app/privkey.pem;
    location / {
        proxy_pass https://127.0.0.1:3500;
        include /etc/nginx/proxy_params;
    }
}
And this is the /etc/nginx/proxy_params file:
proxy_buffers 16 32k;
proxy_buffer_size 64k;
proxy_busy_buffers_size 128k;
proxy_cache_bypass $http_pragma $http_authorization;
proxy_connect_timeout 59s;
proxy_hide_header X-Powered-By;
proxy_http_version 1.1;
proxy_ignore_headers Cache-Control Expires;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
proxy_no_cache $http_pragma $http_authorization;
proxy_pass_header Set-Cookie;
proxy_read_timeout 600;
proxy_redirect off;
proxy_send_timeout 600;
proxy_temp_file_write_size 64k;
proxy_set_header Accept-Encoding '';
proxy_set_header Cookie $http_cookie;
proxy_set_header Host $host;
proxy_set_header Proxy '';
proxy_set_header Referer $http_referer;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Original-Request $request_uri;
proxy_ssl_server_name on;
The domain is also set up to use CloudFlare (as with the others), though CloudFlare is 'paused' meaning that it is only used for DNS routing to the server. The server block is also symlinked to sites-active and, contrary to the error shown, the site does actually have a valid HTTPS certificate if you visit it. I am unsure what could be causing the 502.
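For context, with proxy_pass https://127.0.0.1:3500; the upstream Node.js process has to terminate TLS itself, and "peer closed connection in SSL handshake while SSL handshaking to upstream" means that this backend handshake is failing. A minimal sketch of such an upstream listener, assuming it reuses the same Let's Encrypt files (paths are illustrative), would be:
var https = require('https');
var fs = require('fs');

https.createServer({
  key: fs.readFileSync('/etc/letsencrypt/live/rees.app/privkey.pem'),
  cert: fs.readFileSync('/etc/letsencrypt/live/rees.app/fullchain.pem')
}, function (req, res) {
  // the proxied requests from nginx arrive here over TLS
  res.end('hello from :3500');
}).listen(3500, '127.0.0.1');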
Managed to resolve this issue by re-issuing the certificates again. Whilst I had tried this the first time, I did not realise that it is important to remove the (to my knowledge, only) three folders from the directories /accounts, /live, and /renewal. I had not done this, which is why my renewals did not work.
The best way to ensure that your certificate is correct is to make sure that /live contains the certificate and that /renewal contains a .conf that is not empty. If any of these are not the case, it is likely that the certificate did not deploy correctly and you should take a look to see if you have missed anything.

node silently closes requests with literal space in url

Let's start a simple server:
var http = require('http');
http.createServer(function (req, res) {
  console.log('asdasd');
  res.end('asdasd');
}).listen(8898);
And make a simple request:
curl -v 'localhost:8898/?ab'
* Trying ::1...
* Connected to localhost (::1) port 8898 (#0)
> GET /?ab HTTP/1.1
> Host: localhost:8898
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 13 Oct 2016 20:26:14 GMT
< Connection: keep-alive
< Content-Length: 6
<
* Connection #0 to host localhost left intact
asdasd
Looks like everything is all right.
But if we add a literal space to it...
cornholio-osx:~/>curl -v 'localhost:8898/?a b'
* Trying ::1...
* Connected to localhost (::1) port 8898 (#0)
> GET /?a b HTTP/1.1
> Host: localhost:8898
> User-Agent: curl/7.43.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
Nothing is logged and no body is written.
I assume literal spaces in URLs are a violation of the HTTP protocol, but is this behavior HTTP-compliant?
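For what it's worth, a raw space in the request target is indeed invalid (it has to be percent-encoded as %20), so Node's HTTP parser gives up before the request listener ever runs. In recent Node versions the failure is at least observable through the server's 'clientError' event; a sketch:
var http = require('http');

var server = http.createServer(function (req, res) {
  res.end('asdasd');
});

// the malformed request line never reaches the request listener;
// the parse error surfaces here instead, so we can log it and answer 400
server.on('clientError', function (err, socket) {
  console.log('client error: ' + err.message);
  socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
});

server.listen(8898);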

Getting random "http first read error: EOF" errors in varnish

I'm seeing the following 503 error in varnish from time to time in the logs:
* << BeReq >> 213585014
- Begin bereq 213585013 fetch
- Timestamp Start: 1452675822.032332 0.000000 0.000000
- BereqMethod GET
- BereqURL /client/hedge-funds-asset-managers/
- BereqProtocol HTTP/1.1
- BereqHeader X-Real-IP: 123.125.71.28
- BereqHeader Host: XXXXXXXXXXXXXXXXXXX
- BereqHeader X-Forwarded-Proto: http
- BereqHeader User-Agent: Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)
- BereqHeader Accept-Encoding: gzip
- BereqHeader Accept-Language: zh-cn,zh-tw
- BereqHeader Accept: */*
- BereqHeader X-Forwarded-For: 172.18.210.22
- BereqHeader X-Varnish: 213585014
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 232 reload_2016-01-12T07:28:50.cp_12 162.251.80.23 80 172.18.210.71 40019
- Timestamp Bereq: 1452675822.047840 0.015508 0.015508
- FetchError http first read error: EOF
- BackendClose 232 reload_2016-01-12T07:28:50.cp_12
- Timestamp Beresp: 1452675876.038544 54.006212 53.990704
- Timestamp Error: 1452675876.038555 54.006223 0.000010
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Wed, 13 Jan 2016 09:04:36 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
- BerespHeader Content-Type: text/html; charset=utf-8
- BerespHeader Retry-After: 5
- VCL_return deliver
- Storage malloc Transient
- ObjProtocol HTTP/1.1
- ObjStatus 503
- ObjReason Backend fetch failed
- ObjHeader Date: Wed, 13 Jan 2016 09:04:36 GMT
- ObjHeader Server: Varnish
- ObjHeader Content-Type: text/html; charset=utf-8
- ObjHeader Retry-After: 5
- Length 286
- BereqAcct 350 0 350 0 0 0
- End
The issue is not with the backend connection, because a curl to the same URL from the varnish server works fine. The version of varnish is 4.1.0. I'm not sure what "http first read error: EOF" means, and any light shed on this issue is appreciated. Due to the random nature of the issue, I also do not have a way to reproduce it.
A "first read error" happens in Varnish when you try to read headers from the backend before calling vcl_fetch, and Varnish failed to get a response. TL;DR: your backend is either closing the connection before delivering a response, or it is timing out delivering the response. You could use a tool like wireshark to determine which of the two is happening.
To understand what goes on, let's do some source diving:
static int __match_proto__(vdi_gethdrs_f)
vbe_dir_gethdrs(const struct director *d, struct worker *wrk,
    struct busyobj *bo)
{
    int i, extrachance = 1;
    struct backend *bp;
    struct vbc *vbc;
    ...
    do {
        vbc = vbe_dir_getfd(wrk, bp, bo);
Not getting too much into directors, vbe_dir_gethdrs is called after Varnish has either opened a new connection, or decided it is going to reuse a connection.
        if (vbc->state != VBC_STATE_STOLEN)
            extrachance = 0;
If we reuse a connection, vbc->state is set to VBC_STATE_STOLEN (Varnish-Cache/bin/varnishd/cache/cache_backend_tcp.c line 364). When we've opened a new connection, this value is not set. So far, so good.
        i = V1F_SendReq(wrk, bo, &bo->acct.bereq_hdrbytes, 0);
        if (vbc->state != VBC_STATE_USED)
            VBT_Wait(wrk, vbc);
        assert(vbc->state == VBC_STATE_USED);
        if (i == 0)
            i = V1F_FetchRespHdr(bo);
What this does is send the request to the backend. If everything there goes well, we then call V1F_FetchRespHdr, which waits for the origin to send its protocol response and headers. If we follow the code into V1F_FetchRespHdr:
    VTCP_set_read_timeout(htc->fd, htc->first_byte_timeout);
    ...
    do {
        ...
        i = read(htc->fd, htc->rxbuf_e, i);
        if (i <= 0) {
            bo->acct.beresp_hdrbytes +=
                htc->rxbuf_e - htc->rxbuf_b;
            WS_ReleaseP(htc->ws, htc->rxbuf_b);
            VSLb(bo->vsl, SLT_FetchError, "http %sread error: EOF",
                first ? "first " : "");
            htc->doclose = SC_RX_TIMEOUT;
            return (first ? 1 : -1);
        }
Here, we see that we're setting a timeout on the socket before we do the read syscall. If this read returns an error (the < 0 case), or EOF (the == 0 case), and this is the first time we have called read, we end up logging http first read error: EOF as you are seeing in your varnishlog output.
So, if you open a new connection to the backend, and the backend times out or closes the connection after the request was sent, you get this error.
Personally, I would find it suspect if your origin was closing connections; I think timeouts are usually more likely. But connections may be closed if your backend thinks it has too many open connections, or perhaps it has received too many requests over the connection, or something like this.
Hope that helps!
