chrome-aws-lambda Amazon linux 2 getting Error: socket hang up - node.js

Background:
I had this code working on the AWS Linux AMI with Node 8 for Lambda. Since Amazon has discontinued Node 8 in Lambda, I have been transitioning to Node 10, which now runs on Amazon Linux 2. Since upgrading I have been unable to get past the "socket hang up" error.
Version sets
Node v10.18.1
chrome-aws-lambda 2.0.2
puppeteer 2.0.0
Amazon Linux release 2 (Karoo)
Snippet of code:
console.log('start 1')
try {
  // create the browser session and page, then go to the url
  const browser = await puppeteer.launch({
    // devtools: true
    args: chrome.args,
    defaultViewport: chrome.defaultViewport,
    executablePath: await chrome.executablePath,
    headless: chrome.headless,
  })
  console.log('start 2')
  const page = await browser.newPage()
  console.log('starting browser logic')
  // set the page timeout in milliseconds, currently 2
  page.setDefaultTimeout(pageTimeOut)
  // go to the webpage and wait for network traffic to die off
  const [startPage] = await Promise.all([
    page.goto(url),
    page.waitForNavigation({waitUntil: "networkidle0"})
  ])
Error:
The error occurs at await puppeteer.launch.
bash-4.2# node run.js
starting check: LoginCheck
start 1
ErrorEvent {
target:
WebSocket {
domain: null,
_events:
[Object: null prototype] { open: [Function], error: [Function] },
_eventsCount: 2,
_maxListeners: undefined,
readyState: 3,
protocol: '',
_binaryType: 'nodebuffer',
_closeFrameReceived: false,
_closeFrameSent: false,
_closeMessage: '',
_closeTimer: null,
_closeCode: 1006,
_extensions: {},
_receiver: null,
_sender: null,
_socket: null,
_isServer: false,
_redirects: 0,
url:
'ws://127.0.0.1:41553/devtools/browser/cd72d3b1-e70e-4a34-aa65-351ef1857587',
_req: null },
type: 'error',
message: 'socket hang up',
error:
{ Error: socket hang up
at createHangUpError (_http_client.js:323:15)
at Socket.socketOnEnd (_http_client.js:426:23)
at Socket.emit (events.js:203:15)
at Socket.EventEmitter.emit (domain.js:448:20)
at endReadableNT (_stream_readable.js:1145:12)
at process._tickCallback (internal/process/next_tick.js:63:19) code: 'ECONNRESET' } }

I was able to resolve this issue by installing the following AWS Linux 2 libraries:
pango.x86_64 libXcomposite.x86_64 libXcursor.x86_64 libXdamage.x86_64 libXext.x86_64 libXi.x86_64 libXtst.x86_64 cups-libs.x86_64 libXScrnSaver.x86_64 libXrandr.x86_64 alsa-lib.x86_64 gtk3.x86_64 xorg-x11-fonts-100dpi xorg-x11-utils xorg-x11-fonts-Type1 xorg-x11-fonts-misc xorg-x11-fonts-cyrillic xorg-x11-fonts-75dpi ipa-gothic-fonts atk.x86_64 GConf2.x86_64 avahi.x86_64
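For anyone debugging the same failure: a quick way to see which shared libraries Chromium is missing is to launch with dumpio: true (a standard Puppeteer launch option), which pipes Chromium's own stdout/stderr into the Node process. This is a diagnostic sketch based on the snippet above, not part of the original fix:

const chrome = require('chrome-aws-lambda')
const puppeteer = require('puppeteer')

async function diagnoseLaunch() {
  // dumpio surfaces Chromium's stderr, e.g. lines like
  // "error while loading shared libraries: libXss.so.1: cannot open shared object file"
  const browser = await puppeteer.launch({
    args: chrome.args,
    defaultViewport: chrome.defaultViewport,
    executablePath: await chrome.executablePath,
    headless: chrome.headless,
    dumpio: true,
  })
  await browser.close()
}

diagnoseLaunch().catch(err => console.error('launch failed:', err))

Each "cannot open shared object file" line points at a package to install on the Amazon Linux 2 image.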

Related

Sendgrid on FB Cloud functions, random requests fail with timeout error

This issue has been happening for a few weeks: some of our emails are not going out, and the SendGrid SDK (@sendgrid/mail for Node.js) errors out with a timeout. We have bumped the timeout to 20000 ms, but we still have some failures.
The way our scheduler works, we save a document in a document DB that holds the email's metadata and the ID of a GCP task. The GCP task calls happen 100% of the time, and depending on the result the document in the DB is marked "complete" or "failed". We notice that tasks that should be sending an email are marked as failed, and checking our Sentry logs, those always have a "timeout" error.
The intriguing part is that some of those errored tasks did actually send an email, so we need to be careful about how we retry on failure to avoid sending the same email multiple times.
A few key points:
If we don't set a "timeout" value in the Sendgrid config, it won't error out, so the task will be marked as complete even if no email goes out.
Over 95% of our email tasks go out correctly. We were not able to link the failures to a time of day, a type of email payload, or a specific email address; everything looks random, but I am sure there is a reason for the failures.
Could it be related to running in a cloud function? The stack runs on Firebase Cloud Functions; we have a few debug logs in place, all the logs show up correctly, and the function finishes its execution correctly.
Example of our sender method:
async function sendMail(templateName, msg, worker = {}) {
  try {
    const API_KEY = functions.config().sendgrid.key;
    sgMail.setApiKey(API_KEY);
    // https://github.com/sendgrid/sendgrid-nodejs/tree/main/docs/use-cases
    sgMail.setTimeout(20000);
    const env = getEnvironment();
    if (env === "development") {
      msg.to = "our-test-email@gmail.com";
    }
    msg.templateId = functions.config().sendgrid[templateName];
    const response = await sgMail.send(msg);
    console.log(`DEBUG: ${msg.to} Sendgrid response:`, response);
    return response;
  } catch (error) {
    if (error.response && error.response.body) {
      console.error(error.response.body);
    } else {
      console.log(error);
    }
    Sentry.withScope(function (scope) {
      !isEmpty(worker) &&
        scope.setUser({
          firstName: worker.firstName,
          lastName: worker.lastName,
        });
      Sentry.captureException(error);
    });
    // Throw the error back to the caller to mark the task as failed
    throw new functions.https.HttpsError(
      "internal",
      msg.to ? `Error sending email to ${msg.to}` : "Error sending email",
      error
    );
  }
}
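Regarding the double-send concern raised above, one hedged approach (the helper names here are hypothetical, not from the original code) is to record a send marker on the email's metadata document before calling SendGrid, and have the retry path skip documents already marked, so a task that timed out after the email actually went out is not resent:

// Hypothetical sketch: emailDoc is the metadata document described above;
// wasSendAttempted/markSendAttempt are assumed helpers backed by the same document DB.
async function sendMailOnce(templateName, msg, emailDoc, worker = {}) {
  if (await wasSendAttempted(emailDoc.id)) {
    console.log(`Skipping ${emailDoc.id}: a send was already attempted`);
    return null;
  }
  // Mark before sending, so a timeout after SendGrid accepted the mail
  // cannot lead to a second copy on retry.
  await markSendAttempt(emailDoc.id);
  return sendMail(templateName, msg, worker); // the original sender above
}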
One example of the error:
Error: timeout of 20000ms exceeded
File "/layers/google.nodejs.yarn/yarn_modules/node_modules/#sendgrid/client/node_modules/axios/lib/core/createError.js", line 16, col 15, in createError
var error = new Error(message);
File "/layers/google.nodejs.yarn/yarn_modules/node_modules/#sendgrid/client/node_modules/axios/lib/adapters/http.js", line 303, col 16, in RedirectableRequest.handleRequestTimeout
reject(createError(
File "node:events", line 527, col 28, in RedirectableRequest.emit
File "node:domain", line 537, col 15, in RedirectableRequest.emit
File "/layers/google.nodejs.yarn/yarn_modules/node_modules/follow-redirects/index.js", line 164, col 12, in Timeout.<anonymous>
self.emit("timeout");
File "node:internal/timers", line 559, col 17, in listOnTimeout
File "node:internal/timers", line 502, col 7, in processTimers
Extra response metadata:
scheduler/sender -> sendCompanyEmail() Company: FgTgemCNhhAscqTTMTWa Message: EMAIL HttpsError: Error sending email to ops-dev@trabapro.com
at sendMail (/workspace/lib/src/helpers/index.js:67:15)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async /workspace/lib/src/scheduler/sender.js:306:17
at async Promise.all (index 0)
at async sendCompanyEmail (/workspace/lib/src/scheduler/sender.js:303:13) {
code: 'internal',
details: Error: timeout of 20000ms exceeded
at createError (/layers/google.nodejs.yarn/yarn_modules/node_modules/@sendgrid/client/node_modules/axios/lib/core/createError.js:16:15)
at RedirectableRequest.handleRequestTimeout (/layers/google.nodejs.yarn/yarn_modules/node_modules/@sendgrid/client/node_modules/axios/lib/adapters/http.js:303:16)
at RedirectableRequest.emit (node:events:527:28)
at RedirectableRequest.emit (node:domain:537:15)
at Timeout.<anonymous> (/layers/google.nodejs.yarn/yarn_modules/node_modules/follow-redirects/index.js:164:12)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7) {
config: {
url: '/v3/mail/send',
method: 'post',
data: ...
...
baseURL: 'https://api.sendgrid.com/',
transformRequest: [Array],
transformResponse: [Array],
timeout: 20000,
adapter: [Function: httpAdapter],
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: Infinity,
maxBodyLength: Infinity,
validateStatus: [Function: validateStatus],
transitional: [Object]
},
code: 'ECONNABORTED',
request: Writable {
_writableState: [WritableState],
_events: [Object: null prototype],
_eventsCount: 3,
_maxListeners: undefined,
_options: [Object],
_ended: true,
_ending: true,
_redirectCount: 0,
_redirects: [],
_requestBodyLength: 1557,
_requestBodyBuffers: [Array],
_onNativeResponse: [Function (anonymous)],
_currentRequest: [ClientRequest],
_currentUrl: 'https://api.sendgrid.com/v3/mail/send',
_timeout: null,
[Symbol(kCapture)]: false
},
response: undefined,
isAxiosError: true,
toJSON: [Function: toJSON]
},
httpErrorCode: { canonicalName: 'INTERNAL', status: 500 }

Tunnel-SSH doesn't connect to the server successfully in a Node application

I'm trying to connect to a cloud server that runs my MongoDB from my local machine. I'm using tunnel-ssh within the Node.js application I'm creating, however, I seem to have multiple problems and I don't fully understand what's going on.
Problems
I'm not 100% sure I'm successfully connecting to the server. There's no error; however, when I console.log(server) it shows _connections: 0 (see the full log below).
If I am connecting and then I try to run the getDataFromMongoDB function it returns the error, EADDRINUSE: address already in use 127.0.0.1:27000.
I've been trying to wrap my head around this all day and I'm not getting anywhere. Please help.
Error 1 - Is server connecting
Server {
_events:
[Object: null prototype] { connection: [Function], close: [Function] },
_eventsCount: 2,
_maxListeners: undefined,
_connections: 0,
_handle:
TCP {
reading: false,
onread: null,
onconnection: [Function: onconnection],
[Symbol(owner)]: [Circular] },
_usingWorkers: false,
_workers: [],
_unref: false,
allowHalfOpen: false,
pauseOnConnect: false,
_connectionKey: '4:127.0.0.1:27017',
[Symbol(asyncId)]: 7 }
Code
var config = {
  username: "root",
  Password: "password on the server",
  host: "server IP address",
  port: 22,
  dstHost: "127.0.0.1",
  dstPort: 27017,
  localHost: "127.0.0.1",
  localPort: 27000
};
var tnl = tunnel(config, function(error, tnl) {
  if (error) {
    console.log(error);
  }
  // yourClient.connect();
  // yourClient.disconnect();
  console.log(tnl);
  getDataFromMongoDB();
});
async function getDataFromMongoDB(page) {
  const MongoClient = require("mongodb").MongoClient;
  const uri = "mongodb://USERNAME:PASSWORD_FOR_MONGODB_DATABASE@localhost:27017";
  const client2 = new MongoClient(uri, { useNewUrlParser: true });
  const client = await connectToMongodb(client2);
  const collection = client.db("my_database_name").collection("jobs");
  const jobs = await collection.find().toArray();
  console.log("jobs", jobs);
}
function connectToMongodb(client) {
  return new Promise((resolve, reject) => {
    client.connect(function(err) {
      console.log("connected", err);
      return resolve(client);
    });
  });
}
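For reference, here is a sketch of how the pieces in this question are normally meant to line up (not a verified fix for the error above): tunnel-ssh listens on localHost:localPort and forwards traffic to dstHost:dstPort on the remote machine, so the Mongo client would usually point at the local end of the tunnel (port 27000 in this config) rather than straight at 27017:

// Assuming the config object shown above: the tunnel listens on 127.0.0.1:27000
// locally and forwards to 127.0.0.1:27017 on the remote server.
const tunnel = require("tunnel-ssh");
const { MongoClient } = require("mongodb");

tunnel(config, async function(error, tnl) {
  if (error) return console.log(error);
  // Connect through the tunnel's local port, not the remote MongoDB port.
  const uri = "mongodb://USERNAME:PASSWORD@127.0.0.1:27000";
  const client = new MongoClient(uri, { useNewUrlParser: true });
  await client.connect();
  const jobs = await client.db("my_database_name").collection("jobs").find().toArray();
  console.log("jobs", jobs);
  await client.close();
  tnl.close();
});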

docker compose networking with axios (node) error ENOTFOUND

I have a docker compose stack with a few containers. The two in question are an extended python:3-onbuild container (see base image here) with a falcon webserver running and a basic node:8.11-alpine container that is attempting to make post requests to the python webserver. This is a simplified version of my compose file:
version: '3.6'
services:
  app: # python:3-onbuild
    ports:
      - 5000
    build:
      context: ../../
      dockerfile: infra/docker/app.Dockerfile
  lambda: # node:8.11-alpine
    ports:
      - 10000
    build:
      context: ../../
      dockerfile: infra/docker/lambda.Dockerfile
    depends_on:
      - app
I know the networking is working because if I ssh into the lambda container
docker exec -it default_lambda_1 /bin/ash
and run ping app I get a response back.
$ ping app
PING app (172.18.0.4): 56 data bytes
64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.184 ms
64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.141 ms
I can even run ping app:5000
ping app:5000
PING app:5000 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.105 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.176 ms
I even created a /status endpoint on my api to attempt to work through this issue. It queries the names of all the tables in the database, so it should fail if the database isn't ready. But, if I run curl http://app:5000/status I get a response back with all my table names, no problem at all.
The problem only arises when my node process attempts to make a post request (using axios)
this.endpointUrl = 'http://app:5000';
const options: AxiosRequestConfig = {
  method: 'POST',
  url: `${this.endpointUrl}/session/get`,
  data: {
    'client': 'alexa'
  },
  responseType: 'json'
};
axios(options)
  .then((response: AxiosResponse<any>) => {
    console.log(`[SessionService][getSession]: "/session/get" responseData = ${JSON.stringify(response.data)}`);
  })
  .catch(err => {
    console.error(`[SessionService][getSession]: error = ${err}`);
  });
I get the following error:
console.error src/index.ts:111
VirtualConsole.on.e at project/node_modules/jsdom/lib/jsdom/virtual-console.js:29:45
Error: Error: getaddrinfo ENOTFOUND app app:5000
at Object.dispatchError (/project/node_modules/jsdom/lib/jsdom/living/xhr-utils.js:65:19)
at Request.client.on.err (/project/node_modules/jsdom/lib/jsdom/living/xmlhttprequest.js:676:20)
at emitOne (events.js:121:20)
at Request.emit (events.js:211:7)
at Request.onRequestError (/project/node_modules/request/request.js:878:8)
at emitOne (events.js:116:13)
at ClientRequest.emit (events.js:211:7)
at Socket.socketErrorListener (_http_client.js:387:9)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7) undefined
console.error src/index.ts:111
axios_1.default.then.catch.err at services/Service.ts:94:25
[SessionService][getSession]: error = Error: Network Error
I learned more about this ENOTFOUND error message from here
getaddrinfo is by definition a DNS issue
So then, I looked into using the Node dns package to look up app
> dns.lookup('app', console.log)
GetAddrInfoReqWrap {
callback: [Function: bound consoleCall],
family: 0,
hostname: 'app',
oncomplete: [Function: onlookup],
domain:
Domain {
domain: null,
_events: { error: [Function: debugDomainError] },
_eventsCount: 1,
_maxListeners: undefined,
members: [] } }
> null '172.18.0.3' 4
> dns.lookup('app:5000', console.log)
GetAddrInfoReqWrap {
callback: [Function: bound consoleCall],
family: 0,
hostname: 'app:5000',
oncomplete: [Function: onlookup],
domain:
Domain {
domain: null,
_events: { error: [Function: debugDomainError] },
_eventsCount: 1,
_maxListeners: undefined,
members: [] } }
> { Error: getaddrinfo ENOTFOUND app:5000
at errnoException (dns.js:50:10)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:92:26)
code: 'ENOTFOUND',
errno: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'app:5000' }
So it looks like it can find app but not app:5000? Weird. Since finding this out, I tried changing the python webserver port to 80 so I could make the axios requests to plain ol' http://app, but that resulted in the same ENOTFOUND error.
I cannot find much information on how to fix this. One idea is from here
the problem is that http.request uses dns.lookup instead of dns.resolve
The answer suggests using an overlay network. I don't really understand what that is, but if it is required, I'll go ahead and do that. But, I'm wondering if there is a more simple solution now that I'm using compose yml version 3.6. In any case, dns.resolve gives me similar output to dns.lookup
> dns.resolve('app', console.log)
QueryReqWrap {
bindingName: 'queryA',
callback: [Function: bound consoleCall],
hostname: 'app',
oncomplete: [Function: onresolve],
ttl: false,
domain:
Domain {
domain: null,
_events: { error: [Function: debugDomainError] },
_eventsCount: 1,
_maxListeners: undefined,
members: [] },
channel: ChannelWrap {} }
> null [ '172.18.0.3' ]
> dns.resolve('app:5000', console.log)
QueryReqWrap {
bindingName: 'queryA',
callback: [Function: bound consoleCall],
hostname: 'app:5000',
oncomplete: [Function: onresolve],
ttl: false,
domain:
Domain {
domain: null,
_events: { error: [Function: debugDomainError] },
_eventsCount: 1,
_maxListeners: undefined,
members: [] },
channel: ChannelWrap {} }
> { Error: queryA ENOTFOUND app:5000
at errnoException (dns.js:50:10)
at QueryReqWrap.onresolve [as oncomplete] (dns.js:238:19)
code: 'ENOTFOUND',
errno: 'ENOTFOUND',
syscall: 'queryA',
hostname: 'app:5000' }
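For what it's worth, both failures make sense: DNS only resolves hostnames, and the port is not part of the name, so app:5000 is treated as a literal hostname that doesn't exist. A minimal illustration:

const dns = require('dns');

// Resolve the service name alone; the port is never part of a DNS lookup.
dns.lookup('app', (err, address) => {
  if (err) throw err;
  // The HTTP client combines the name (or address) with the port itself,
  // e.g. it still requests http://app:5000/... after resolving "app".
  console.log(`app -> ${address}`);
});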
EDIT: By the way, I can make the same POST requests from the lambda container across https to the outside world (a production server running the same code as the app service)
EDIT: If I remove http:// from the endpointUrl as suggested here, I get Error: Invalid protocol: app
EDIT: I thought it might be related to this issue with the node-alpine base image, but changing to node:8.11 or node:carbon resulted in the same DNS issue ENOTFOUND
EDIT: I'm sure it is not a timing issue because I'm waiting to run my tests for 100 seconds, making a test request every 10 seconds. If I skip the test step and let the container spin up like normal, I'm able to curl app from inside the lambda container far before a 100-second mark...
The problem ended up being with my test runner, jest.
The default environment in Jest is a browser-like environment through jsdom. If you are building a node service, you can use the node option to use a node-like environment instead.
To fix this I had to add this in my jest.config.js file:
// jest.config.js
module.exports = {
  // ...
  testEnvironment: 'node',
  // ...
};
Source: https://github.com/axios/axios/issues/1418
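As a side note (not from the linked issue): Jest also lets you switch the environment per test file with a docblock pragma, which is handy when only the tests that hit the network need a Node environment. A minimal sketch:

/**
 * @jest-environment node
 */

// The docblock must be the very first thing in the file; this file's tests
// then run in the node environment even if the project default is jsdom.
test('posts to the app service', async () => {
  // ...
});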

YouTube API TypeError: callback is not a function

I have tried to get a collection of search results with Node.js, following https://developers.google.com/youtube/v3/docs/search/list#usage,
which uses this function:
function searchListByKeyword(auth, requestData) {
  var service = google.youtube('v3');
  var parameters = removeEmptyParameters(requestData['params']);
  parameters['auth'] = auth;
  service.search.list(parameters, function(err, response) {
    if (err) {
      console.log('The API returned an error: ' + err);
      return;
    }
    console.log(response);
  });
}
The function should return a collection of search results, but in my case it returns a Request object:
$ node quickstart
Request {
_events:
{ error: [Function: bound ],
complete: [Function: bound ],
pipe: [Function] },
_eventsCount: 3,
_maxListeners: undefined,
url: 'https://www.googleapis.com/youtube/v3/search',
method: 'GET',
paramsSerializer: [Function],
headers:
{ 'Accept-Encoding': 'gzip',
'User-Agent': 'google-api-nodejs-client/28.1.0 (gzip) google-api-nodejs-client/0.12.0',
Authorization: 'Bearer ya29.GluVBXvsx3nP15QlV-MqcypX1eEvDQ3BuXttbfQyR7ZlTpfO26sZnrh7Sl0sWsOCdGsJWHpHtE9XJv0jDqoWmVOZoJbQLqqQs3ujIPlATDQBHxu5nNorO8JBCE6y',
host: 'www.googleapis.com' },
params: { maxResults: '25', part: 'snippet', q: 'surfing', type: 'qwe' },
maxContentLength: 2147483648,
validateStatus: [Function],
uri:
Url {
protocol: 'https:',
slashes: true,
and after all of this it throws an error:
C:\Users\wwwba\Desktop\youtube\node_modules\google-auth-library\lib\auth\oauth2client.js:341
callback(err, result, response);
^
TypeError: callback is not a function
at OAuth2Client.postRequest (C:\Users\wwwba\Desktop\youtube\node_modules\google-auth-library\lib\auth\oauth2client.js:341:9)
at postRequestCb (C:\Users\wwwba\Desktop\youtube\node_modules\google-auth-library\lib\auth\oauth2client.js:297:23)
at Request._callback (C:\Users\wwwba\Desktop\youtube\node_modules\google-auth-library\lib\transporters.js:113:17)
at Request.self.callback (C:\Users\wwwba\Desktop\youtube\node_modules\request\request.js:186:22)
at Request.emit (events.js:180:13)
at Request.<anonymous> (C:\Users\wwwba\Desktop\youtube\node_modules\request\request.js:1163:10)
at Request.emit (events.js:180:13)
at IncomingMessage.<anonymous> (C:\Users\wwwba\Desktop\youtube\node_modules\request\request.js:1085:12)
at Object.onceWrapper (events.js:272:13)
at IncomingMessage.emit (events.js:185:15)
Can anyone tell me what I'm doing wrong? Thanks.
The solution is simpler than I thought.
I uninstalled googleapis and google-auth-library and installed their old versions:
"google-auth-library": "^0.12.0",
"googleapis": "^21.3.0"
Google, rewrite your API for the new versions or warn that we MUST INSTALL OLD VERSIONS.
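Alternatively (untested here, and assuming a current googleapis release rather than the pinned old versions), the newer client returns a promise when no callback is passed, with the payload on res.data, so the same function can be written as:

async function searchListByKeyword(auth, requestData) {
  const service = google.youtube('v3');
  const parameters = removeEmptyParameters(requestData['params']);
  parameters['auth'] = auth;
  // In recent googleapis versions, omitting the callback returns a promise
  // and the search results live on res.data rather than the second callback argument.
  const res = await service.search.list(parameters);
  console.log(res.data);
}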

Elastic search gives Bad request for ping

Code in elasticsearch.js file
function es() {
  throw new Error('Looks like you are expecting the previous "elasticsearch" module. ' +
    'It is now the "es" module. To create a client with this module use ' +
    '`new es.Client(params)`.');
}
es.Client = require('./lib/client');
es.ConnectionPool = require('./lib/connection_pool');
es.Transport = require('./lib/transport');
es.errors = require('./lib/errors');
module.exports = es;
var elasticsearch = require('elasticsearch')
var client = new es.Client({
  host: 'localhost:9200',
  log: 'trace',
})
// Ping the cluster
client.ping({
    requestTimeOut: 30000,
  },
  function(error) {
    if (error) {
      console.log(error)
      console.error("elasticsearch cluster is down!")
    } else {
      console.log("All is well")
    }
  })
and I am running Elasticsearch locally with bin/elasticsearch,
but when I run node elasticsearch.js it gives this error:
Elasticsearch INFO: 2018-01-22T11:17:50Z
Adding connection to http://localhost:9200/
Elasticsearch DEBUG: 2018-01-22T11:17:50Z
starting request {
"method": "HEAD",
"requestTimeout": 3000,
"castExists": true,
"path": "/",
"query": {
"requestTimeOut": 30000
}
}
Elasticsearch TRACE: 2018-01-22T11:17:50Z
-> HEAD http://localhost:9200/?requestTimeOut=30000
<- 400
Elasticsearch DEBUG: 2018-01-22T11:17:50Z
Request complete
{ Error: Bad Request
at respond (/Users/ElasticSearchServer/node_modules/elasticsearch/src/lib/transport.js:307:15)
at checkRespForFailure (/Users/ElasticSearchServer/node_modules/elasticsearch/src/lib/transport.js:266:7)
at HttpConnector.<anonymous> (/Users/ElasticSearchServer/node_modules/elasticsearch/src/lib/connectors/http.js:159:7)
at IncomingMessage.bound (/Users/ElasticSearchServer/node_modules/elasticsearch/node_modules/lodash/dist/lodash.js:729:21)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:185:7)
at endReadableNT (_stream_readable.js:974:12)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9)
status: 400,
displayName: 'BadRequest',
message: 'Bad Request',
path: '/',
query: { requestTimeOut: 30000 },
body: undefined,
statusCode: 400,
response: '',
toString: [Function],
toJSON: [Function] }
elasticsearch cluster is down!
If I try adding a new index, deleting an index, checking the health, or searching, it works fine and gives the appropriate result.
Can anyone help me fix the issue? Thanks in advance!
In the new JavaScript client, every option that is not intended for Elasticsearch lives in a second object, so your code should be updated as follows:
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })
client.ping({}, { requestTimeout: 20000 }, (err, response) => {
  ...
})
In the response object other than body, statusCode, and headers, you will also find a warnings array and a meta object, which should help you debug issues.
In this case, warnings contained the following message: 'Client - Unknown parameter: "requestTimeout", sending it as query parameter'.
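For example, a minimal sketch of surfacing those fields with the client shown above:

client.ping({}, { requestTimeout: 20000 }, (err, response) => {
  if (err) {
    console.error('elasticsearch cluster is down!', err)
    return
  }
  // warnings is null or an array of notices, such as the unknown-parameter message above
  if (response.warnings) console.warn(response.warnings)
  console.log('All is well', response.statusCode)
})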
