aws-serverless-express connection error - EPIPE - node.js

We have a NodeJS 10.16.3 Express API. We've recently switched from AWS Elastic Beanstalk/EC2 to Lambda / Serverless. Our DB is Postgres (PostgreSQL) 12.2.
Seemingly all of a sudden, I started getting this error on my local server when making requests from the client:
offline: ANY /dev/inventory/inventory (λ: app)
ERROR: aws-serverless-express connection error
{ Error: write EPIPE
at WriteWrap.afterWrite (net.js:788:14) errno: 'EPIPE', code: 'EPIPE', syscall: 'write' }
offline: (λ: app) RequestId: ckazracm0001emds69068drtu Duration: 2.58 ms Billed Duration: 100 ms
I can't seem to find much on this issue and I'm hoping someone can help.
Notes:
My local postgres is running
My .env.json file is correct

Found the error. Hopefully this will be helpful for others.
The error was due to exceeding the max cookie size of 4096 bytes in the application request headers. We solved it by stripping out erroneous cookies passed through by some third-party services we were using, which had long encoded strings for cookie values.
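For illustration, one place such stripping could live is an Express middleware that trims the Cookie header before the request is handled (a minimal sketch; the 4096-byte threshold matches the limit above, but the cookie-name prefix is hypothetical, and depending on where the oversized header actually breaks things the stripping may need to happen client-side or at a proxy instead):

const express = require('express');
const app = express();

const MAX_COOKIE_HEADER_BYTES = 4096;

app.use((req, res, next) => {
  const cookieHeader = req.headers.cookie || '';
  if (Buffer.byteLength(cookieHeader, 'utf8') > MAX_COOKIE_HEADER_BYTES) {
    // Drop third-party cookies with long encoded values; '_thirdparty' is a
    // hypothetical prefix standing in for whatever the real services set.
    req.headers.cookie = cookieHeader
      .split('; ')
      .filter((cookie) => !cookie.startsWith('_thirdparty'))
      .join('; ');
  }
  next();
});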

Related

ECONNRESET - Does not gracefully throw error, but crashes web app

We have a NodeJS app running as an Azure Web App Service on a Linux-based App Service Plan (configured as Always On).
Setup:
NodeJS 16
App Service Plan (Linux)
Redis (Azure managed hosted service)
Application Insights (Azure managed hosted service)
Packages:
express 4.17.2
dotenv 14.2.0
redis 4.0.2
applicationinsights 2.2.0
The web service does basic data calculations and returns the results as a REST API.
Application Insights has been enabled at the App Service level in the portal.
For additional fault monitoring, we added the npm package applicationinsights version 2.2.0 in code.
Application Insights is configured at startup of the app using:
const appInsights = require("applicationinsights");
appInsights.setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING)
appInsights.start()
The app service runs for some time but then crashes unexpectedly with the following in the KUDU logs:
2022-01-20T00:41:19.028838008Z events.js:377
2022-01-20T00:41:19.029056811Z throw er; // Unhandled 'error' event
2022-01-20T00:41:19.029073211Z ^
2022-01-20T00:41:19.029079111Z
2022-01-20T00:41:19.029084211Z SocketClosedUnexpectedlyError: Socket closed unexpectedly
2022-01-20T00:41:19.029089512Z at TLSSocket.<anonymous> (/home/site/wwwroot/node_modules/@node-redis/client/dist/lib/client/socket.js:184:118)
2022-01-20T00:41:19.029095412Z at Object.onceWrapper (events.js:520:26)
2022-01-20T00:41:19.029100512Z at TLSSocket.emit (events.js:412:35)
2022-01-20T00:41:19.029105412Z at net.js:675:12
2022-01-20T00:41:19.029110212Z at TCP.done (_tls_wrap.js:563:7)
2022-01-20T00:41:19.029115112Z Emitted 'error' event on Commander instance at:
2022-01-20T00:41:19.029128012Z at RedisSocket.<anonymous> (/home/site/wwwroot/node_modules/@node-redis/client/dist/lib/client/index.js:338:14)
2022-01-20T00:41:19.029149012Z at RedisSocket.emit (events.js:400:28)
2022-01-20T00:41:19.029154512Z at RedisSocket._RedisSocket_onSocketError (/home/site/wwwroot/node_modules/@node-redis/client/dist/lib/client/socket.js:207:10)
2022-01-20T00:41:19.029159212Z at TLSSocket.<anonymous> (/home/site/wwwroot/node_modules/@node-redis/client/dist/lib/client/socket.js:184:107)
2022-01-20T00:41:19.029164013Z at Object.onceWrapper (events.js:520:26)
2022-01-20T00:41:19.029168413Z [... lines matching original stack trace ...]
2022-01-20T00:41:19.029172813Z at TCP.done (_tls_wrap.js:563:7)
I then removed the use of Redis to test the scenario without an external connection, but after some time running, the application still crashed without triggering any try/catch code.
I was able to trace the following debug information:
arg0:OperationalError {cause: Error: read ECONNRESET
at TCP.onStreamRead…nternal/stream_base_commons:220:20)
at TC…, isOperational: true, errno: -4077, code: 'ECONNRESET', syscall: 'read', …}
cause:Error: read ECONNRESET\n at TCP.onStreamRead (node:internal/stream_base_commons:220:20)\n at TCP.callbackTrampoline (node:internal/async_hooks:130:17) {errno: -4077, code: 'ECONNRESET', syscall: 'read', stack: 'Error: read ECONNRESET\n at TCP.onStreamRea…Trampoline (node:internal/async_hooks:130:17)', message: 'read ECONNRESET'}
code:'ECONNRESET'
errno:-4077
isOperational:true
syscall:'read'
message:'read ECONNRESET'
name:'Error'
stack:'Error: read ECONNRESET\n at TCP.onStreamRead (node:internal/stream_base_commons:220:20)\n at TCP.callbackTrampoline (node:internal/async_hooks:130:17)'
My local debug console points me to the file /node_modules/diagnostic-channel-publishers/dist/src/console.pub.js:43:39, which as I understand it is used to forward console log events to Application Insights.
I then removed Application Insights, and the Web App has been running stably without any crashes. I re-enabled the use of Redis and have traced no issues thus far. This points to the issue being Application Insights not gracefully handling a break in the TCP socket connection to the Application Insights service.
Is there any way to confirm this or prevent the app from crashing?
Error: read ECONNRESET\n at TCP.onStreamRead (node:internal/stream_base_commons:220:20)\n at TCP.callbackTrampoline (node:internal/async_hooks:130:17) {errno: -4077, code: 'ECONNRESET', syscall: 'read', stack: 'Error: read ECONNRESET\n at TCP.onStreamRea…Trampoline (node:internal/async_hooks:130:17)', message: 'read ECONNRESET'}
"ECONNRESET" is commonly thrown when the other end of a TCP connection closes its end because of any protocol-related issues and since no one is listening to the 'error' event it gets thrown. To cope with it, you need set up a listener that can handle such an erroneous condition.
Application Insights not being able to gracefully handle a break in TCP Socket connection
The number of outbound connections that can be made is limited; the maximum number of outbound connections is determined by the size of the worker used.
For more information, please refer to this MSFT documentation.
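If the crash really does originate in the console auto-collector traced above (console.pub.js), one possible workaround is to disable console collection when initializing the SDK. setAutoCollectConsole is part of the applicationinsights configuration API, though whether it resolves this specific crash is an assumption:

const appInsights = require("applicationinsights");

appInsights
  .setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING)
  .setAutoCollectConsole(false) // skip the diagnostic-channel console publisher
  .start();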

Nodejs Error "EPROTO" when using GitHub Webhook to forward to Jenkins using dockerimage

I'm using a Jenkins server behind a firewall. I used smee-client (smee.io) to get the webhooks from GitHub through the firewall.
I used the Docker image from deltaprojects/smee-client. It is running and connects to smee.io/xyz to receive the webhooks. When GitHub sends a webhook (configured to send it to smee.io/xyz), it is successful with a 200 response.
But the smee-client is throwing some EPROTO errors from Node.js (see the output below).
Config of the GitHub webhook:
Payload URL: https://smee.io/xyz
Content type: application/json
SSL verification: enabled
Events: send me everything
Active: yes
The webhooks seem to work and get a 200 HTTP response.
The smee-client is showing the following Error:
{ Error: write EPROTO 140483050982248:error:1408F10B:SSL
routines:ssl3_get_record:wrong version
number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
at WriteWrap.afterWrite [as oncomplete] (net.js:788:14)
errno: 'EPROTO',
code: 'EPROTO',
syscall: 'write',
response: undefined }
I tried to build the image myself, but got the same error message.
I'm not that familiar with SSL certificates, or even sure whether this problem is related to SSL.
Maybe someone has faced this problem as well and has a hint about what I'm doing wrong? That would be really nice.
This got solved by forwarding from smee-client to Jenkins with http:// instead of https://.
The error message was kind of misleading.
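For reference, the fix maps onto smee-client's Node API roughly like this (a sketch; the Jenkins host and webhook path are illustrative):

const SmeeClient = require('smee-client');

const smee = new SmeeClient({
  source: 'https://smee.io/xyz',
  // Forward over plain http://; Jenkins was not speaking TLS on this port,
  // which is what produced the misleading EPROTO "wrong version number".
  target: 'http://jenkins:8080/github-webhook/',
  logger: console,
});

const events = smee.start();
// events.close() would stop forwarding.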

Dialogflow NodeJS library issue

I'm working on a Dialogflow POC where I'm trying to invoke the v2 API provided by Dialogflow. While trying the NodeJS code example provided here, I'm getting the error below:
{ Error: EHOSTUNREACH undefined: Getting metadata from plugin failed with error: request to https://www.googleapis.com/oauth2/v4/token failed, reason: connect EHOSTUNREACH 0.0.38.172:80 - Local (192.168.0.103:51468)
at Object.callErrorFromStatus (/Users/devuser/Development/workspaces/df-poc/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
at Http2CallStream.call.on (/Users/devuser/Development/workspaces/df-poc/node_modules/@grpc/grpc-js/build/src/client.js:96:33)
at Http2CallStream.emit (events.js:203:15)
at process.nextTick (/Users/devuser/Development/workspaces/df-poc/node_modules/@grpc/grpc-js/build/src/call-stream.js:75:22)
at process._tickCallback (internal/process/next_tick.js:61:11)
code: 'EHOSTUNREACH',
details:
'Getting metadata from plugin failed with error: request to https://www.googleapis.com/oauth2/v4/token failed, reason: connect EHOSTUNREACH 0.0.38.172:80 - Local (192.168.0.103:51468)',
metadata: Metadata { internalRepr: Map {}, options: {} } }
I have set GOOGLE_APPLICATION_CREDENTIALS and pointed the environment variable to the credentials file. The invocation works fine if I use the REST API route with an Authorization header.
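For reference, the invocation in question follows the standard v2 sample, roughly like this (a sketch; projectId, sessionId, and the query text are placeholders):

const dialogflow = require('dialogflow');

// Picks up the key file from GOOGLE_APPLICATION_CREDENTIALS automatically.
const sessionClient = new dialogflow.SessionsClient();

async function detectIntent(projectId, sessionId, text) {
  const sessionPath = sessionClient.sessionPath(projectId, sessionId);
  const [response] = await sessionClient.detectIntent({
    session: sessionPath,
    queryInput: { text: { text, languageCode: 'en-US' } },
  });
  return response.queryResult;
}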
Kindly let me know if there is something I'm missing here.
EHOSTUNREACH means the remote host of the files you are requesting is unreachable. It is either down, or your computer cannot access it due to some other restriction, such as location, which can be solved with a VPN.

Error: spawn EACCES on AWS Lambda using html-to-pdf package

I'm using html-pdf and trying to convert HTML to PDF on AWS Lambda using Node.js, but I get the "Error: spawn EACCES" message:
START RequestId: 8bc188e7-8249-41d7-b8f7-8a2585ea8e46 Version: $LATEST
2019-06-07T20:44:44.824Z 8bc188e7-8249-41d7-b8f7-8a2585ea8e46 ************** start
2019-06-07T20:44:45.025Z 8bc188e7-8249-41d7-b8f7-8a2585ea8e46 Error: spawn EACCES
at _errnoException (util.js:1022:11)
at ChildProcess.spawn (internal/child_process.js:323:11)
at Object.exports.spawn (child_process.js:502:9)
at PDF.PdfExec [as exec] (/var/task/node_modules/html-pdf/lib/pdf.js:87:28)
at PDF.PdfToBuffer [as toBuffer] (/var/task/node_modules/html-pdf/lib/pdf.js:44:8)
at exports.handler (/var/task/index.js:17:35)
END RequestId: 8bc188e7-8249-41d7-b8f7-8a2585ea8e46
REPORT RequestId: 8bc188e7-8249-41d7-b8f7-8a2585ea8e46 Duration: 345.46 ms Billed Duration: 400 ms Memory Size: 128 MB Max Memory Used: 39 MB
RequestId: 8bc188e7-8249-41d7-b8f7-8a2585ea8e46 Process exited before completing request
A couple of ideas:
How do you package and upload your code?
Lambda requires the files to have read access for all users, particularly "other"; if this is missing, you will receive a non-obvious error when trying to call the function. The fix is simple enough: perform a chmod a+r * before creating your zip file. If the code is visible in the inline editor, adding an empty line and saving will also fix the problem, presumably by overwriting the file with the correct permissions.
Where are you saving the converted file / are you using the Lambda /tmp directory? It might be a wrong path (see the sketch after this list).
The Lambda timeout doesn't allow enough time to execute your function. Less likely, but since the log mentions the process exited before completing the request, I would double-check the timeout settings on your function.
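Regarding the second idea, here is a minimal handler sketch that writes the output to Lambda's writable /tmp directory (the HTML input and file name are illustrative; html-pdf also needs its PhantomJS binary to be executable in the Lambda environment, which is a separate EACCES candidate):

const pdf = require('html-pdf');

exports.handler = (event, context, callback) => {
  const html = '<h1>Hello</h1>'; // illustrative input
  // /tmp is the only writable path on the Lambda filesystem.
  pdf.create(html).toFile('/tmp/output.pdf', (err, res) => {
    if (err) return callback(err);
    callback(null, { filename: res.filename });
  });
};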

How to push data into MongoDB using SparkFun Phant?

I am new to Phant and I cannot find suitable documentation on using Phant with MongoDB. I have lots of data, so a memory overflow occurs, and I finally run into the following error:
HTTP output: { [Error: EMFILE, open 'phant_streams/4d16/83403f7611e5810d57f88174fbef/stream.csv']
errno: -24,
code: 'EMFILE',
path: 'phant_streams/4d16/83403f7611e5810d57f88174fbef/stream.csv' }
events.js:87
throw Error('Uncaught, unspecified "error" event.');
^
Error: Uncaught, unspecified "error" event.
at Error (native)
at Function.emit (events.js:87:13)
at Function.<anonymous> (/usr/lib/node_modules/phant/node_modules/phant-manager-http/index.js:237:12)
at PhantMeta.<anonymous> (/usr/lib/node_modules/phant/node_modules/phant-meta-nedb/lib/phant-meta-nedb.js:243:14)
at callback (/usr/lib/node_modules/phant/node_modules/phant-meta-nedb/node_modules/nedb/lib/executor.js:30:17)
at /usr/lib/node_modules/phant/node_modules/phant-meta-nedb/node_modules/nedb/lib/datastore.js:536:25
at /usr/lib/node_modules/phant/node_modules/phant-meta-nedb/node_modules/nedb/lib/persistence.js:201:12
at fs.js:1077:21
at FSReqWrap.oncomplete (fs.js:95:15)
Besides this, the following error also sometimes occurs:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
That's why I want to use MongoDB, to prevent this error. I searched for this and finally found the SparkFun library for MongoDB:
https://github.com/sparkfun/phant-stream-mongodb
I installed this, but nothing happened, as data is still not being stored in Mongo.
So, how do I store Phant data in MongoDB?
I had the same problem, specifically trying to deploy my own Phant instance on Heroku (since I wanted to circumvent SparkFun's 50 MB limit). After some dabbling with versions of the mongodb and mongoose libraries, I successfully forked and modified their repository so that you can either run it locally or deploy it directly on Heroku (just make sure you provision a MongoLab add-on). Check out my fork here: https://github.com/davidlago/phant
Hope this helps!