Nodejs busboy is not a constructor - node.js

I just containerised a Node.js based server-side app and I'm getting the error below; any assistance will be greatly appreciated. I had this same app running on an EC2 instance without any issue at all.
0|src | GET /.env 404 1.499 ms - 43
10|src | 2022-03-23T18:05:11.118Z - error: ** Error: Status: 500 Message: Busboy is not a constructor
10|src | POST / 500 2.442 ms - 27
11|src | GET /ping 200 1.145 ms - 18
13|src | GET /ping 200 2.026 ms - 18
12|src | GET /ping 200 1.166 ms - 18
15|src | GET /ping 200 1.128 ms - 18
14|src | 2022-03-23T18:05:55.508Z - error: ** Error: Status: 500 Message: Busboy is not a constructor
14|src | POST /lessons?downloadLanguage=en&localize=false&allowMT=false 500 1.425 ms - 27
9|src | GET /ping 200 1.154 ms - 18
8|src | GET /ping 200 1.128 ms - 18
Any time I want to upload a file I get this error.

I changed the Docker node image version from v10 to v14 and the connect-busboy version from v0.0.2 to v1.0.0, and that took care of the "Busboy is not a constructor" error for me.
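For context, busboy changed its export in v1.x from a class to a factory function, which is why new Busboy(...) throws this error on newer versions. A minimal sketch of the two call styles (req here is just an illustrative incoming HTTP request, not taken from the app above):

const busboy = require('busboy');

// busboy >= 1.0: the module exports a factory function, so call it directly
const bb = busboy({ headers: req.headers });

// busboy 0.x only: the module exported a constructor
// const Busboy = require('busboy');
// const bb = new Busboy({ headers: req.headers });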

Related

Gunicorn access log format not applied

I'm using gunicorn to run a FastAPI script. The access log file is created via gunicorn.conf.py with accesslog, yet the access_log_format is not applied. I tried applying the example from the GitHub docs and it is still not working.
My gunicorn.conf.py
accesslog = '/home/ossbod/chunhueitest/supervisor_log/accesslog.log'
loglevel = 'info'
access_log_format = '%(h)s %(l)s %(t)s "%(r)s" %(s)s %(q)s %(b)s "%(f)s" "%(a)s" %(M)s'
The result I got
<IP>:54668 - "GET /docs HTTP/1.1" 200
<IP>:54668 - "GET /openapi.json HTTP/1.1" 200
<IP>:54668 - "POST /api/v1/add_user HTTP/1.1" 201
How can I get the format to apply to the log?
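One hedged observation, not from the original post: FastAPI apps are usually served through uvicorn workers, and in that setup the access lines are written by uvicorn's own logger, which gunicorn's access_log_format does not control; the unformatted lines above are consistent with that. To confirm that gunicorn.conf.py itself is being honored, a throwaway WSGI app under gunicorn's default sync worker should produce lines in the custom format:

# hello.py - a minimal WSGI app, used only to sanity-check gunicorn.conf.py
def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\n']

# run it with: gunicorn -c gunicorn.conf.py hello:app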

Get-WinEvent output is truncated

I want to get the records from the event log, but it keeps truncating the messages.
Get-WinEvent -LogName 'Microsoft-AppV-Client/Admin' -MaxEvents 5
TimeCreated Id LevelDisplayName Message
10/21/2021 2:29:20 PM 19102 Error Getting server publishing data failed....
10/21/2021 2:29:20 PM 19203 Error HttpRequest sendRequest failed....
10/21/2021 2:29:05 PM 19102 Error Getting server publishing data failed....
10/21/2021 2:29:05 PM 19203 Error HttpRequest sendRequest failed....
10/21/2021 2:28:50 PM 19102 Error Getting server publishing data failed....
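A hedged suggestion, not part of the original post: the trailing "...." usually comes from the default table view truncating the Message column rather than from Get-WinEvent itself, so bypassing the default formatting shows the full text:

# wrap the Message column instead of truncating it
Get-WinEvent -LogName 'Microsoft-AppV-Client/Admin' -MaxEvents 5 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -Wrap

# or pull out just the full message text
Get-WinEvent -LogName 'Microsoft-AppV-Client/Admin' -MaxEvents 5 |
    Select-Object -ExpandProperty Message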

Nginx - Rate limit when origin server response code is 401

I would like nginx to rate limit by user IP when the origin server responds with a 401 status code. How would I go about this? I already have a limit_req_zone set up for normal API calls, which looks something like this: limit_req_zone $binary_remote_addr zone=api:10m rate=5r/s; but I would like to further rate limit offenders that make unauthorized calls to my API endpoints.
Edit:
I did try mapping the response status 401 to IP addresses and rate limiting based on the mapped variable, but that doesn't seem to do anything. See the code below.
map $status $limit {
default '';
401 $binary_remote_addr;
}
limit_req_zone $limit zone=api:10m rate=5r/s;
location /api {
limit_req zone=api burst=5;
...
}
This is quite tricky because $status is still empty at the point where the limit_req key is evaluated: the status is only known after nginx has processed the request, for example after a proxy_pass directive.
The closest I could get to achieve rate limiting by status is doing the following:
...
...
...
limit_req_zone $binary_remote_addr zone=api:10m rate=5r/s;
...
...
...
server {
location /mylocation {
proxy_intercept_errors on;
proxy_pass http://example.org;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 426 428 429 431 451 500 501 502 503 504 505 506 507 508 510 511 @custom_error;
}
location @custom_error {
limit_req zone=api burst=5 nodelay;
return <some_error_code>;
}
}
...
The drawback is that this way you must return a different status code than the one from the proxied response.
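For what it's worth, a minimal sketch of that approach with a second, dedicated zone for unauthorized callers (the zone name unauth, its rate, and the returned codes are illustrative assumptions, not taken from the original config):

limit_req_zone $binary_remote_addr zone=api:10m rate=5r/s;
limit_req_zone $binary_remote_addr zone=unauth:10m rate=1r/s;

server {
    location /api {
        limit_req zone=api burst=5 nodelay;
        proxy_intercept_errors on;
        proxy_pass http://example.org;
        error_page 401 @unauthorized;
    }

    location @unauthorized {
        # only requests that came back 401 from the upstream land here
        limit_req zone=unauth burst=5 nodelay;
        limit_req_status 429;   # code sent once the unauth limit is exceeded
        return 403;             # as noted above, this must differ from the upstream's 401
    }
}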

Getting random "http first read error: EOF" errors in varnish

I'm seeing the following 503 error in varnish from time to time in the logs:
* << BeReq >> 213585014
- Begin bereq 213585013 fetch
- Timestamp Start: 1452675822.032332 0.000000 0.000000
- BereqMethod GET
- BereqURL /client/hedge-funds-asset-managers/
- BereqProtocol HTTP/1.1
- BereqHeader X-Real-IP: 123.125.71.28
- BereqHeader Host: XXXXXXXXXXXXXXXXXXX
- BereqHeader X-Forwarded-Proto: http
- BereqHeader User-Agent: Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)
- BereqHeader Accept-Encoding: gzip
- BereqHeader Accept-Language: zh-cn,zh-tw
- BereqHeader Accept: */*
- BereqHeader X-Forwarded-For: 172.18.210.22
- BereqHeader X-Varnish: 213585014
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 232 reload_2016-01-12T07:28:50.cp_12 162.251.80.23 80 172.18.210.71 40019
- Timestamp Bereq: 1452675822.047840 0.015508 0.015508
- FetchError http first read error: EOF
- BackendClose 232 reload_2016-01-12T07:28:50.cp_12
- Timestamp Beresp: 1452675876.038544 54.006212 53.990704
- Timestamp Error: 1452675876.038555 54.006223 0.000010
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Wed, 13 Jan 2016 09:04:36 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
- BerespHeader Content-Type: text/html; charset=utf-8
- BerespHeader Retry-After: 5
- VCL_return deliver
- Storage malloc Transient
- ObjProtocol HTTP/1.1
- ObjStatus 503
- ObjReason Backend fetch failed
- ObjHeader Date: Wed, 13 Jan 2016 09:04:36 GMT
- ObjHeader Server: Varnish
- ObjHeader Content-Type: text/html; charset=utf-8
- ObjHeader Retry-After: 5
- Length 286
- BereqAcct 350 0 350 0 0 0
- End
The issue is not with the backend connection, because a curl to the same URL from the varnish server works fine. The version of varnish is 4.1.0. I'm not sure what "http first read error: EOF" means, and any light shed on this issue is appreciated. Due to the random nature of this issue, I do not have a way to reproduce it either.
A "first read error" happens in Varnish when you try to read headers from the backend before calling vcl_fetch, and Varnish failed to get a response. TL;DR: your backend is either closing the connection before delivering a response, or it is timing out delivering the response. You could use a tool like wireshark to determine which of the two is happening.
To understand what goes on, let's do some source diving:
static int __match_proto__(vdi_gethdrs_f)
vbe_dir_gethdrs(const struct director *d, struct worker *wrk,
    struct busyobj *bo)
{
    int i, extrachance = 1;
    struct backend *bp;
    struct vbc *vbc;
    ...
    do {
        vbc = vbe_dir_getfd(wrk, bp, bo);
Without getting too much into directors: vbe_dir_gethdrs is called after Varnish has either opened a new connection or decided it is going to reuse an existing one.
        if (vbc->state != VBC_STATE_STOLEN)
            extrachance = 0;
If we reuse a connection, vbc->state is set to VBC_STATE_STOLEN (Varnish-Cache/bin/varnishd/cache/cache_backend_tcp.c line 364). When we've opened a new connection, this value is not set. So far, so good.
        i = V1F_SendReq(wrk, bo, &bo->acct.bereq_hdrbytes, 0);
        if (vbc->state != VBC_STATE_USED)
            VBT_Wait(wrk, vbc);
        assert(vbc->state == VBC_STATE_USED);
        if (i == 0)
            i = V1F_FetchRespHdr(bo);
This sends the request to the backend. If everything there goes well, we then call V1F_FetchRespHdr, which waits for the origin to send its protocol response and headers. If we follow the code into V1F_FetchRespHdr:
    VTCP_set_read_timeout(htc->fd, htc->first_byte_timeout);
    ...
    do {
        ...
        i = read(htc->fd, htc->rxbuf_e, i);
        if (i <= 0) {
            bo->acct.beresp_hdrbytes +=
                htc->rxbuf_e - htc->rxbuf_b;
            WS_ReleaseP(htc->ws, htc->rxbuf_b);
            VSLb(bo->vsl, SLT_FetchError, "http %sread error: EOF",
                first ? "first " : "");
            htc->doclose = SC_RX_TIMEOUT;
            return (first ? 1 : -1);
        }
Here, we see that we're setting a timeout on the socket before we do the read syscall. If this read returns an error (the < 0 case), or EOF (the == 0 case), and this is the first time we have called read, we end up logging http first read error: EOF as you are seeing in your varnishlog output.
So, if you open a new connection to the backend, and the backend times out or closes the connection after the request was sent, you get this error.
Personally, I would find it suspect if your origin was closing connections; I think timeouts are usually more likely. But connections may be closed if your backend thinks it has too many open connections, or perhaps it has received too many requests over the connection, or something like this.
Hope that helps!
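If timeouts turn out to be the culprit, one knob worth checking is the backend's first_byte_timeout (60 seconds by default). A minimal sketch of raising it in the VCL backend definition, where the host, port and chosen values are placeholders rather than taken from the setup above:

backend default {
    .host = "backend.example.org";   # placeholder backend host
    .port = "80";
    .connect_timeout = 5s;
    .first_byte_timeout = 90s;       # how long to wait for the first byte of the response
    .between_bytes_timeout = 60s;
}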

Error during couchdb filtered replication with params

I'm trying to run a filtered replication between two different machines. I realized that this error only happens when doing a pull replication; if I do a push replication it works fine.
curl -X POST http://localhost:5984/_replicate -d '{\"source\":\"http://MARTIN-NEWPC:5984/pdlib\",\"target\":\"pdlib\",\"filter\":\"replication/SINGLE_COLLECTION\",\"query_params\":{\"key\":\"bb579347-9bfb-4dda-84eb-622b43108872\"}}' -H "Content-Type: application/json"
The cryptic response I get from that request is:
{"error":"json_encode", "reason":"{bad_term, <0.20050.0>}"}
And the debug output in the target couchdb log file is:
[Mon, 17 Oct 2011 01:20:48 GMT] [debug] [<0.476.0>] 'GET' /pdlib/_changes?key=bb579347-9bfb-4dda-84eb-622b43108872&filter=replication/SINGLE_COLLECTION&style=all_docs&heartbeat=10000&since=0&feed=normal {1,
1}
Headers: [{'Accept',"application/json"},
{'Content-Length',"0"},
{'Host',"MARTIN-NEWPC:5984"},
{'User-Agent',"CouchDB/1.0.2"}]
[Mon, 17 Oct 2011 01:20:48 GMT] [debug] [<0.476.0>] OAuth Params: [{"key","bb579347-9bfb-4dda-84eb-622b43108872"},
{"filter","replication/SINGLE_COLLECTION"},
{"style","all_docs"},
{"heartbeat","10000"},
{"since","0"},
{"feed","normal"}]
[Mon, 17 Oct 2011 01:20:48 GMT] [info] [<0.476.0>] 192.168.2.3 - - 'GET' /pdlib/_changes?key=bb579347-9bfb-4dda-84eb-622b43108872&filter=replication/SINGLE_COLLECTION&style=all_docs&heartbeat=10000&since=0&feed=normal 200
[Mon, 17 Oct 2011 01:20:48 GMT] [error] [<0.476.0>] attempted upload of invalid JSON (set log_level to debug to log it)
[Mon, 17 Oct 2011 01:20:48 GMT] [debug] [<0.476.0>] Invalid JSON: <<"bb579347-9bfb-4dda-84eb-622b43108872">>
[Mon, 17 Oct 2011 01:20:48 GMT] [info] [<0.476.0>] 192.168.2.3 - - 'GET' /pdlib/_changes?key=bb579347-9bfb-4dda-84eb-622b43108872&filter=replication/SINGLE_COLLECTION&style=all_docs&heartbeat=10000&since=0&feed=normal 400
[Mon, 17 Oct 2011 01:20:48 GMT] [debug] [<0.476.0>] httpd 400 error response:
{"error":"bad_request","reason":"invalid UTF-8 JSON"}
In case you need to know, this is the filter function:
function (doc, req) {
    if (doc.type == 'collection' || doc.type == 'document') {
        for (var i in doc.path) {
            if (doc.path[i] == req.query.key) {
                return true;
            }
        }
    }
    return false;
}
Any ideas about the possible cause?
It's common to get a 400 "invalid UTF-8 JSON" error when CouchDB tries to interpret one of your query values as JSON when it's a raw (unquoted) string instead. In this case the replication config results in this HTTP request:
GET /pdlib/_changes?key=bb579347-9bfb-4dda-84eb-622b43108872&filter=replication/SINGLE_COLLECTION&style=all_docs&heartbeat=10000&since=0&feed=normal
The _changes feed itself doesn't use a key parameter, but normal CouchDB _view queries do (and there it is expected to be a JSON value!), so you might try renaming that query_param to something different.
(Somewhat unfortunately, user-defined filter (and list, etc.) functions share the query parameter namespace with CouchDB itself...you may want to prefix your custom parameters with something that's unlikely to conflict with current or future builtin options, e.g. myapp_key.)
Looks to me like there is something wrong with the way you have your JSON escaped. This works for me:
curl -X POST http://localhost:5984/_replicate -d '{"source":"source_db","target":"target_db","filter":"ddoc/filter-name","query_params":{"key":"some_key"}}' -H "Content-Type: application/json"
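Putting the two answers together, a sketch of what the pull replication might look like with proper quoting and the key parameter renamed (myapp_key is just the illustrative name suggested above; the filter function would then need to read req.query.myapp_key):

curl -X POST http://localhost:5984/_replicate -H "Content-Type: application/json" -d '{"source":"http://MARTIN-NEWPC:5984/pdlib","target":"pdlib","filter":"replication/SINGLE_COLLECTION","query_params":{"myapp_key":"bb579347-9bfb-4dda-84eb-622b43108872"}}'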
