I have a docker compose stack with a few containers. The two in question are an extended python:3-onbuild container (see base image here) running a falcon webserver, and a basic node:8.11-alpine container that attempts to make POST requests to the python webserver. This is a simplified version of my compose file:
version: '3.6'
services:
  app: # python:3-onbuild
    ports:
      - 5000
    build:
      context: ../../
      dockerfile: infra/docker/app.Dockerfile
  lambda: # node:8.11-alpine
    ports:
      - 10000
    build:
      context: ../../
      dockerfile: infra/docker/lambda.Dockerfile
    depends_on:
      - app
I know the networking is working because if I shell into the lambda container
docker exec -it default_lambda_1 /bin/ash
and run ping app I get a response back.
$ ping app
PING app (172.18.0.4): 56 data bytes
64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.184 ms
64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.141 ms
I can even run ping app:5000
ping app:5000
PING app:5000 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.105 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.176 ms
I even created a /status endpoint on my API to attempt to work through this issue. It queries the names of all the tables in the database, so it should fail if the database isn't ready. But if I run curl http://app:5000/status, I get a response back with all my table names, no problem at all.
The problem only arises when my node process attempts to make a POST request (using axios):
this.endpointUrl = 'http://app:5000';

const options: AxiosRequestConfig = {
  method: 'POST',
  url: `${this.endpointUrl}/session/get`,
  data: {
    'client': 'alexa'
  },
  responseType: 'json'
};

axios(options)
  .then((response: AxiosResponse<any>) => {
    console.log(`[SessionService][getSession]: "/session/get" responseData = ${JSON.stringify(response.data)}`);
  })
  .catch(err => {
    console.error(`[SessionService][getSession]: error = ${err}`);
  });
I get the following error:
console.error src/index.ts:111
VirtualConsole.on.e at project/node_modules/jsdom/lib/jsdom/virtual-console.js:29:45
Error: Error: getaddrinfo ENOTFOUND app app:5000
at Object.dispatchError (/project/node_modules/jsdom/lib/jsdom/living/xhr-utils.js:65:19)
at Request.client.on.err (/project/node_modules/jsdom/lib/jsdom/living/xmlhttprequest.js:676:20)
at emitOne (events.js:121:20)
at Request.emit (events.js:211:7)
at Request.onRequestError (/project/node_modules/request/request.js:878:8)
at emitOne (events.js:116:13)
at ClientRequest.emit (events.js:211:7)
at Socket.socketErrorListener (_http_client.js:387:9)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7) undefined
console.error src/index.ts:111
axios_1.default.then.catch.err at services/Service.ts:94:25
[SessionService][getSession]: error = Error: Network Error
I learned more about this ENOTFOUND error message from here
getaddrinfo is by definition a DNS issue
So then, I looked into using the node dns package to look up app:
> dns.lookup('app', console.log)
GetAddrInfoReqWrap {
  callback: [Function: bound consoleCall],
  family: 0,
  hostname: 'app',
  oncomplete: [Function: onlookup],
  domain:
   Domain {
     domain: null,
     _events: { error: [Function: debugDomainError] },
     _eventsCount: 1,
     _maxListeners: undefined,
     members: [] } }
> null '172.18.0.3' 4

> dns.lookup('app:5000', console.log)
GetAddrInfoReqWrap {
  callback: [Function: bound consoleCall],
  family: 0,
  hostname: 'app:5000',
  oncomplete: [Function: onlookup],
  domain:
   Domain {
     domain: null,
     _events: { error: [Function: debugDomainError] },
     _eventsCount: 1,
     _maxListeners: undefined,
     members: [] } }
> { Error: getaddrinfo ENOTFOUND app:5000
    at errnoException (dns.js:50:10)
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:92:26)
  code: 'ENOTFOUND',
  errno: 'ENOTFOUND',
  syscall: 'getaddrinfo',
  hostname: 'app:5000' }
So it looks like it can find app but not app:5000? Weird. Since finding this out, I tried changing the python webserver port to 80 so I could make the axios requests to plain ol' http://app, but that resulted in the same ENOTFOUND error.
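For what it's worth, getaddrinfo only ever sees a bare hostname; the port belongs to the TCP layer that connects afterwards. A minimal sketch of that split (assuming it runs inside the lambda container on the compose network):

const dns = require('dns');
const net = require('net');

// Only the bare hostname is handed to DNS; the port is used afterwards by TCP.
dns.lookup('app', (err, address) => {
  if (err) throw err;
  const socket = net.connect(5000, address, () => {
    console.log(`TCP connect to ${address}:5000 succeeded`);
    socket.end();
  });
});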
I cannot find much information on how to fix this. One idea is from here
the problem is that http.request uses dns.lookup instead of dns.resolve
The answer suggests using an overlay network. I don't really understand what that is, but if it is required, I'll go ahead and do that. But I'm wondering if there is a simpler solution now that I'm using compose file version 3.6. In any case, dns.resolve gives me similar output to dns.lookup:
> dns.resolve('app', console.log)
QueryReqWrap {
  bindingName: 'queryA',
  callback: [Function: bound consoleCall],
  hostname: 'app',
  oncomplete: [Function: onresolve],
  ttl: false,
  domain:
   Domain {
     domain: null,
     _events: { error: [Function: debugDomainError] },
     _eventsCount: 1,
     _maxListeners: undefined,
     members: [] },
  channel: ChannelWrap {} }
> null [ '172.18.0.3' ]

> dns.resolve('app:5000', console.log)
QueryReqWrap {
  bindingName: 'queryA',
  callback: [Function: bound consoleCall],
  hostname: 'app:5000',
  oncomplete: [Function: onresolve],
  ttl: false,
  domain:
   Domain {
     domain: null,
     _events: { error: [Function: debugDomainError] },
     _eventsCount: 1,
     _maxListeners: undefined,
     members: [] },
  channel: ChannelWrap {} }
> { Error: queryA ENOTFOUND app:5000
    at errnoException (dns.js:50:10)
    at QueryReqWrap.onresolve [as oncomplete] (dns.js:238:19)
  code: 'ENOTFOUND',
  errno: 'ENOTFOUND',
  syscall: 'queryA',
  hostname: 'app:5000' }
EDIT: By the way, I can make the same POST requests from the lambda container over https to the outside world (a production server running the same code as the app service).
EDIT: If I remove http:// from the endpointUrl as suggested here, I get Error: Invalid protocol: app.
EDIT: I thought it might be related to this issue with the node-alpine base image, but changing to node:8.11 or node:carbon resulted in the same ENOTFOUND DNS error.
EDIT: I'm sure it is not a timing issue, because I wait 100 seconds before running my tests, making a test request every 10 seconds. If I skip the test step and let the container spin up like normal, I'm able to curl app from inside the lambda container well before the 100-second mark...
The problem ended up being with my test runner, Jest.
The default environment in Jest is a browser-like environment through jsdom. If you are building a node service, you can use the node option to use a node-like environment instead.
To fix this I had to add this to my jest.config.js file:
module.exports = {
  // ...
  testEnvironment: 'node'
  // ...
};
Source: https://github.com/axios/axios/issues/1418
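Jest also supports overriding the environment per test file with a docblock pragma, which avoids switching the whole suite (a minimal sketch; the test body here is illustrative):

/**
 * @jest-environment node
 */
const axios = require('axios');

test('can reach the app service over the compose network', async () => {
  const response = await axios.get('http://app:5000/status');
  expect(response.status).toBe(200);
});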
Related
I have a Node.js server running on an Ubuntu-20.04 Virtual Machine.
I'm using docker compose to set up a mysql container with a production database.
My docker-compose.yml file is like so,
prod_db:
  image: mysql:latest
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: ${MYSQL_PRODUCTION_PASSWORD}
    MYSQL_DATABASE: ${MYSQL_PRODUCTION_DATABASE}
  ports:
    - ${MYSQL_PRODUCTION_PORT}:3306
Running docker compose up on it appears to work fine,
lockers-prod_db-1 | 2022-08-08T19:18:03.005576Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.30' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
And docker container list yields the following,
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
33289149af9f mysql:latest "docker-entrypoint.s…" 37 seconds ago Up 35 seconds 33060/tcp, 0.0.0.0:3308->3306/tcp, :::3308->3306/tcp lockers-prod_db-1
Yet when attempting to connect via Sequelize with the following code,
config = {
  id: 'production',
  port: process.env.NODE_PORT,
  sqlConfig: {
    port: parseInt(process.env.MYSQL_PRODUCTION_PORT),
    host: process.env.MYSQL_PRODUCTION_HOST,
    user: process.env.MYSQL_PRODUCTION_USER,
    password: process.env.MYSQL_PRODUCTION_PASSWORD,
    database: process.env.MYSQL_PRODUCTION_DATABASE,
    locationsCsvPath: process.env.LOCATIONS_CSV_ABSOLUTE_PATH,
    lockersCsvPath: process.env.LOCKERS_CSV_ABSOLUTE_PATH,
    contractsCsvPath: process.env.CONTRACTS_CSV_ABSOLUTE_PATH
  }
};
const sequelize = new Sequelize({
  dialect: 'mysql',
  host: config.sqlConfig.host,
  port: config.sqlConfig.port,
  password: config.sqlConfig.password,
  username: config.sqlConfig.user,
  database: config.sqlConfig.database,
  models: [Contract, Location, Locker],
  logging: false
});
I get the following error,
/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:102
      throw new SequelizeErrors.ConnectionError(err);
      ^

ConnectionError [SequelizeConnectionError]: connect ETIMEDOUT
    at ConnectionManager.connect (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:102:17)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async ConnectionManager._connect (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:220:24)
    at async /home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:174:32
    at async ConnectionManager.getConnection (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:197:7)
    at async /home/freemaa7/lockers/node_modules/sequelize/lib/sequelize.js:301:26
    at async MySQLQueryInterface.tableExists (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/query-interface.js:102:17)
    at async Function.sync (/home/freemaa7/lockers/node_modules/sequelize/lib/model.js:939:21)
    at async Sequelize.sync (/home/freemaa7/lockers/node_modules/sequelize/lib/sequelize.js:373:9) {
  parent: Error: connect ETIMEDOUT
      at Connection._handleTimeoutError (/home/freemaa7/lockers/node_modules/mysql2/lib/connection.js:189:17)
      at listOnTimeout (node:internal/timers:559:17)
      at processTimers (node:internal/timers:502:7) {
    errorno: 'ETIMEDOUT',
    code: 'ETIMEDOUT',
    syscall: 'connect',
    fatal: true
  },
  original: Error: connect ETIMEDOUT
      at Connection._handleTimeoutError (/home/freemaa7/lockers/node_modules/mysql2/lib/connection.js:189:17)
      at listOnTimeout (node:internal/timers:559:17)
      at processTimers (node:internal/timers:502:7) {
    errorno: 'ETIMEDOUT',
    code: 'ETIMEDOUT',
    syscall: 'connect',
    fatal: true
  }
}
I'm running this on a virtual machine; it works perfectly locally though. The main difference is that on the VM an apache2 instance is running. I'm starting to think it may be redirecting the TCP requests for the container to another address, because it's set up as a reverse proxy. Could this be a possibility?
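One way to narrow this down (a hypothetical check, reusing the env vars from the config above) is to test raw TCP reachability to the published port before involving Sequelize; note the container listing above shows host port 3308 mapped to container port 3306:

// Hypothetical reachability probe: if this also times out, the problem is
// network-level (firewall, proxy, wrong host/port), not Sequelize.
const net = require('net');

const socket = net.connect({
  host: process.env.MYSQL_PRODUCTION_HOST,
  port: parseInt(process.env.MYSQL_PRODUCTION_PORT), // 3308 per the port mapping above
}, () => {
  console.log('TCP connection to the MySQL port succeeded');
  socket.end();
});

socket.setTimeout(5000, () => {
  console.error('TCP connect timed out');
  socket.destroy();
});
socket.on('error', err => console.error('TCP connect failed:', err.message));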
Node Version - v12.16.1
NPM Version - 6.13.4
I am using the code below in Node.js to get the VM list from Google Cloud, using the Google Cloud compute library and following this link - https://github.com/googleapis/nodejs-compute#before-you-begin
// By default, the client will authenticate using the service account file
// specified by the GOOGLE_APPLICATION_CREDENTIALS environment variable and use
// the project specified by the GCLOUD_PROJECT environment variable. See
// https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
const Compute = require('@google-cloud/compute');

rejectUnauthorized: false; // add when working with https sites
requestCert: false; // add when working with https sites
agent: false; // add when working with https sites

// Creates a client
const compute = new Compute();

async function getVmsExample() {
  // In this example we only want one VM per page
  const options = {
    maxResults: 1,
  };
  const vms = await compute.getVMs(options);
  return vms;
}

// Run the examples
exports.main = async () => {
  const vms = await getVmsExample().catch(console.error);
  if (vms) console.log('VMs:', vms);
  return vms;
};

if (module === require.main) {
  exports.main(console.log);
}
I have already fulfilled all the prerequisites, but whenever I run the code I get the error below -
FetchError: request to https://www.googleapis.com/oauth2/v4/token failed, reason: unable to get local issuer certificate
at ClientRequest.<anonymous> (C:\Users\username\Desktop\Full-Stack\NodeJS\node-examples\node_modules\node-fetch\lib\index.js:1455:11)
at ClientRequest.emit (events.js:311:20)
at TLSSocket.socketErrorListener (_http_client.js:426:9)
at TLSSocket.emit (events.js:311:20)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:84:21) {
message: 'request to https://www.googleapis.com/oauth2/v4/token failed, reason: unable to get local issuer certificate',
type: 'system',
errno: 'UNABLE_TO_GET_ISSUER_CERT_LOCALLY',
code: 'UNABLE_TO_GET_ISSUER_CERT_LOCALLY',
config: {
method: 'POST',
url: 'https://www.googleapis.com/oauth2/v4/token',
data: {
grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
assertion: 'eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJub2RlanNhY2NvdW50QGNvZ2VudC1jYXNlLTI0MjAxNC5pYW0uZ3NlcnZpY2VhY2NvdW50LmNvbSIsInNjb3BlIjoiaHR0cHM6Ly93d3cuZ29vZ2xlYXBpcy5jb20vYXV0aC9jb21wdXRlIiwiYXVkIjoiaHR0cHM6Ly93d3cuZ29vZ2xlYXBpcy5jb20vb2F1dGgyL3Y0L3Rva2VuIiwiZXhwIjoxNTg3OTU4NDcwLCJpYXQiOjE1ODc5NTQ4NzB9.QSn0bSjHtph4aHGZcXIkWhbbUxampHSOE1BsDkI8dZOah12ICHFOZV0zwrngCPbTMr4MIfTAE7s8fLESjCUEq7lPSvB0uTqU5Lr3fI4FUUEqOGp56821Lh68Z8stWmKb-9HV85h7Ub0aSkJdnezYMcK_-FPu__a3ZLeP3lEnjJu9292DtctGT73XvHaeDTMFiHSI10BlJ2LIPds5lC6XM5I4f6W-4UH0VhUgLo1uCGxJJj0jnkQZbjp11l8KSwsMuIMFvug8G6Y5OKP1E4Ef1EKoEBFGC-vjIjaCPiqkFv4U1yh8xc7ShXh2MBQ8eyUZY1OvDNO4IXexQ-RoWBt0pQ'
},
headers: { 'Content-Type': 'application/json', Accept: 'application/json' },
responseType: 'json',
params: [Object: null prototype] {},
paramsSerializer: [Function: paramsSerializer],
body: '{"grant_type":"urn:ietf:params:oauth:grant-type:jwt-bearer","assertion":"eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJub2RlanNhY2NvdW50QGNvZ2VudC1jYXNlLTI0MjAxNC5pYW0uZ3NlcnZpY2VhY2NvdW50LmNvbSIsInNjb3BlIjoiaHR0cHM6Ly93d3cuZ29vZ2xlYXBpcy5jb20vYXV0aC9jb21wdXRlIiwiYXVkIjoiaHR0cHM6Ly93d3cuZ29vZ2xlYXBpcy5jb20vb2F1dGgyL3Y0L3Rva2VuIiwiZXhwIjoxNTg3OTU4NDcwLCJpYXQiOjE1ODc5NTQ4NzB9.QSn0bSjHtph4aHGZcXIkWhbbUxampHSOE1BsDkI8dZOah12ICHFOZV0zwrngCPbTMr4MIfTAE7s8fLESjCUEq7lPSvB0uTqU5Lr3fI4FUUEqOGp56821Lh68Z8stWmKb-9HV85h7Ub0aSkJdnezYMcK_-FPu__a3ZLeP3lEnjJu9292DtctGT73XvHaeDTMFiHSI10BlJ2LIPds5lC6XM5I4f6W-4UH0VhUgLo1uCGxJJj0jnkQZbjp11l8KSwsMuIMFvug8G6Y5OKP1E4Ef1EKoEBFGC-vjIjaCPiqkFv4U1yh8xc7ShXh2MBQ8eyUZY1OvDNO4IXexQ-RoWBt0pQ"}', validateStatus: [Function: validateStatus]
}
}
I also tried npm config set strict-ssl false.
Does anyone have any idea what is wrong?
Thanks for the help!
When you initiate the client you can either set strictSSL to false (which you did) or pass in new, valid certificates.
Set strictSSL to false (which you already did) and then update the cert files (you should be able to export them here - https://baltimore-cybertrust-root.chain-demos.digicert.com/).
The link http://registry.npmjs.org/npm could be blocked by an IT admin in your organization. (Please verify.)
In addition, you can refer to this Stack Overflow case as a reference for a fix done in a secure manner, along with some alternatives which might point you toward a solution.
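If you go the certificate route, here is a sketch of trusting the exported root CA for the failing endpoint without disabling verification (./corp-root-ca.pem is a hypothetical path to the exported certificate; setting NODE_EXTRA_CA_CERTS to that file achieves a similar effect process-wide, and appends to the default bundle rather than replacing it):

// Sketch: verify the TLS handshake against an exported root CA.
// Note: the `ca` option replaces the default bundle for this agent only.
const fs = require('fs');
const https = require('https');

const agent = new https.Agent({ ca: fs.readFileSync('./corp-root-ca.pem') });

https.get('https://www.googleapis.com/oauth2/v4/token', { agent }, res => {
  // Any HTTP status at all means the TLS handshake itself succeeded.
  console.log('TLS handshake ok, HTTP status:', res.statusCode);
}).on('error', err => console.error('Still failing:', err.message));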
So I've decided to set up my project with two droplets, a MongoDB image droplet and a NodeJS image droplet, both on Ubuntu 18.04. I did so to make it easier to scale both droplets in the future, and because other applications may be connecting to the DB in the future.
I'm getting the following error:
Could not connect to the database: { MongooseServerSelectionError: connection timed out
    at new MongooseServerSelectionError (/root/eternal-peace-code/node_modules/mongoose/lib/error/serverSelection.js:22:11)
    at NativeConnection.Connection.openUri (/root/eternal-peace-code/node_modules/mongoose/lib/connection.js:808:32)
    at Mongoose.connect (/root/eternal-peace-code/node_modules/mongoose/lib/index.js:333:15)
    at Timeout.setTimeout [as _onTimeout] (/root/eternal-peace-code/app.js:60:22)
    at ontimeout (timers.js:482:11)
    at tryOnTimeout (timers.js:317:5)
    at Timer.listOnTimeout (timers.js:277:5)
  message: 'connection timed out',
  name: 'MongooseServerSelectionError',
  reason:
   TopologyDescription {
     type: 'Single',
     setName: null,
     maxSetVersion: null,
     maxElectionId: null,
     servers: Map { 'mongo_droplet_ip:27017' => [Object] },
     stale: false,
     compatible: true,
     compatibilityError: null,
     logicalSessionTimeoutMinutes: null,
     heartbeatFrequencyMS: 10000,
     localThresholdMS: 15,
     commonWireVersion: null },
  [Symbol(mongoErrorContextSymbol)]: {} }
I have set up both servers correctly (so I believe); my MongoDB has two users (both are me with different permissions; I followed several walkthroughs and it just happened that way). My /etc/mongod.conf file has been edited accordingly:
net:
  port: 27017
  bindIp: 127.0.0.1,mongodb_droplet_ip

security:
  authorization: "enabled"
The UFW firewall allows HTTPS, SSH, and my NodeJS droplet's IP on port 27017.
My NodeJS droplet is set up with a domain name pointing to the right IP address, and letsencrypt is set up for secure connections at the domain name. The issue I have now is that I cannot connect to my Mongo droplet from my NodeJS application, and I would also like to make sure that it is all done securely.
My NodeJS connect code uses Mongoose and an environment variables file; I also have the whole connect call in a timeout function, as per another suggestion I saw elsewhere:
setTimeout(async () => {
  console.log('Connecting to database...');
  const options = {
    user: process.env.DBUSER,
    pass: process.env.USERPASS,
    keepAlive: true,
    useNewUrlParser: true,
    useFindAndModify: false,
    useCreateIndex: true,
    useUnifiedTopology: true,
    sslCA: cert,
    connectTimeoutMS: 5000,
  };

  await mongoose.connect(process.env.LIVEDB, options)
    .then(() => {
      console.log('We are connected to the database');
    })
    .catch(err => {
      console.log('Could not connect to the database: ', err);
      connectWithTimeout();
    });
}, 3000);
I have tried a multitude of things on both droplets, but I have already spent around 4 days getting the project to a staging/production state so it can launch whenever and just be updated as time goes on.
If you need anything else, please do let me know.
All help would be hugely appreciated,
Thanks
I don't know the URI format you are using, but I got the same issue initially when I tried to connect my node application to a Mongo instance on AWS.
Using the long URI string with all the cluster names, like this:
mongoURI = 'mongodb://username:password@mongo-instance-shard-00-00-a4iv8.mongodb.net:27017,mongo-instance-shard-00-01-a4iv8.mongodb.net:27017,mongo-instance-shard-00-02-a4iv8.mongodb.net:27017/test?ssl=true&replicaSet=mongo-instance-shard-0&authSource=admin&retryWrites=true&w=majority'
instead of something like this:
mongodb+srv://server.example.com/
This worked for me. It might help you as well.
Also, I found this Digital Ocean link, and the Mongo documentation for the connection string.
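Against a single self-managed droplet rather than a cluster, the long-form style would look roughly like this (a sketch; mongo_droplet_ip and the database name mydb are placeholders, and the auth options mirror the question's config):

const mongoose = require('mongoose');

// Explicit host:port URI instead of the mongodb+srv:// form.
// authSource=admin assumes the users were created in the admin database.
mongoose.connect('mongodb://mongo_droplet_ip:27017/mydb?authSource=admin', {
  user: process.env.DBUSER,
  pass: process.env.USERPASS,
  useNewUrlParser: true,
  useUnifiedTopology: true,
  connectTimeoutMS: 5000,
})
  .then(() => console.log('We are connected to the database'))
  .catch(err => console.error('Could not connect:', err.message));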
Background:
I have this code working on the AWS Linux AMI with node 8 for lambda. Since Amazon has discontinued node 8 in lambda, I have been working on transitioning to node 10, which now uses Amazon Linux 2. Since upgrading, I have been unable to get past an Error: socket hang up issue.
Version sets
Node v10.18.1
chrome-aws-lambda 2.0.2
puppeteer 2.0.0
Amazon Linux release 2 (Karoo)
Snippet of code:
console.log('start 1')
try {
// create the browser session and page. Then go to url
const browser = await puppeteer.launch({
// devtools: true
args: chrome.args,
defaultViewport: chrome.defaultViewport,
executablePath: await chrome.executablePath,
headless: chrome.headless,
})
console.log('start 2')
const page = await browser.newPage()
console.log('starting browser logic')
// set page timeout out milisecods, currently 2
page.setDefaultTimeout(pageTimeOut)
// goes to webpage waits for network traffic to die off
const [startPage] = await Promise.all([
page.goto(url),
page.waitForNavigation({waitUntil: "networkidle0"})
])
Error:
The error occurs at await puppeteer.launch:
bash-4.2# node run.js
starting check: LoginCheck
start 1
ErrorEvent {
  target:
   WebSocket {
     domain: null,
     _events:
      [Object: null prototype] { open: [Function], error: [Function] },
     _eventsCount: 2,
     _maxListeners: undefined,
     readyState: 3,
     protocol: '',
     _binaryType: 'nodebuffer',
     _closeFrameReceived: false,
     _closeFrameSent: false,
     _closeMessage: '',
     _closeTimer: null,
     _closeCode: 1006,
     _extensions: {},
     _receiver: null,
     _sender: null,
     _socket: null,
     _isServer: false,
     _redirects: 0,
     url:
      'ws://127.0.0.1:41553/devtools/browser/cd72d3b1-e70e-4a34-aa65-351ef1857587',
     _req: null },
  type: 'error',
  message: 'socket hang up',
  error:
   { Error: socket hang up
       at createHangUpError (_http_client.js:323:15)
       at Socket.socketOnEnd (_http_client.js:426:23)
       at Socket.emit (events.js:203:15)
       at Socket.EventEmitter.emit (domain.js:448:20)
       at endReadableNT (_stream_readable.js:1145:12)
       at process._tickCallback (internal/process/next_tick.js:63:19) code: 'ECONNRESET' } }
I was able to resolve this issue by installing the following Amazon Linux 2 libraries:
pango.x86_64 libXcomposite.x86_64 libXcursor.x86_64 libXdamage.x86_64 libXext.x86_64 libXi.x86_64 libXtst.x86_64 cups-libs.x86_64 libXScrnSaver.x86_64 libXrandr.x86_64 alsa-lib.x86_64 gtk3.x86_64 xorg-x11-fonts-100dpi xorg-x11-utils xorg-x11-fonts-Type1 xorg-x11-fonts-misc xorg-x11-fonts-cyrillic xorg-x11-fonts-75dpi ipa-gothic-fonts atk.x86_64 GConf2.x86_64 avahi.x86_64
I coded, in my local dev environment, a node.js function that makes several requests to an external url-uri (asynchronously, using bluebird and request-promise). It works fine: the function gets the results and saves the information into the EC2 database.
The problem comes when I deploy the code (node modules included) and execute it. It has access to the database, but when it tries to access the external url-uri, the 'request-promise' module gets a 'connect ETIMEDOUT' error.
I did everything AWS indicates to set this up, and read and tried all the solutions I found on Stack Overflow, but I still have the problem.
https://www.youtube.com/watch?v=AR1nt3iGR5o
The related role that runs the function has the following policies:
AWSLambdaFullAccess - AWSCodeDeployRoleForLambda - AmazonVPCFullAccess - AWSLambdaExecute - AWSLambdaBasicExecutionRole - AWSLambdaVPCAccessExecutionRole - AWSLambdaRole - oneClick_lambda_basic_execution_1535968782861
Function Network Config
NAT gateway
Route Table
Could you help me, or at least give me a hint, please?
CODE:
const Promise = require('bluebird');
const Rp = require('request-promise');
const http = require('http');

var httpAgent = new http.Agent();
httpAgent.maxSockets = 15;

var promises = urls.map(function(url) {
  return Rp({ uri: url.url, pool: httpAgent }).then(function(result) {
    url.result = result;
    // Saving space
    delete url.url;
    return url;
  });
});

Promise.all(promises).then(function(results) {
  return (processResults(results));
}).catch(Error, function(e) {
  console.error("Error doing Request: ", e);
}).error(function(e) {
  console.error("Unable get info: ", e);
}).then(function(results) {
  try {
    product.callback(results);
  } catch (exception) {
    console.error('Error callback: ', exception);
  }
}).then(function() {
  product.finally();
});
ERROR:
2018-09-28T14:53:48.989Z efb5493a-c32d-11e8-ae42-f73dec33ca2a Error doing Request: { RequestError: Error: connect ETIMEDOUT 147.83.184.65:80
    at new RequestError (/var/task/node_modules/request-promise-core/lib/errors.js:14:15)
    at Request.plumbing.callback (/var/task/node_modules/request-promise-core/lib/plumbing.js:87:29)
    at Request.RP$callback [as _callback] (/var/task/node_modules/request-promise-core/lib/plumbing.js:46:31)
    at self.callback (/var/task/node_modules/request/request.js:185:22)
    at emitOne (events.js:116:13)
    at Request.emit (events.js:211:7)
    at Request.onRequestError (/var/task/node_modules/request/request.js:881:8)
    at emitOne (events.js:116:13)
    at ClientRequest.emit (events.js:211:7)
    at Socket.socketErrorListener (_http_client.js:387:9)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at emitErrorNT (internal/streams/destroy.js:64:8)
    at _combinedTickCallback (internal/process/next_tick.js:138:11)
    at process._tickDomainCallback (internal/process/next_tick.js:218:9)
  name: 'RequestError',
  message: 'Error: connect ETIMEDOUT 147.83.184.65:80',
  cause:
   { Error: connect ETIMEDOUT 147.83.184.65:80
       at Object._errnoException (util.js:1022:11)
       at _exceptionWithHostPort (util.js:1044:20)
       at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1198:14)
     code: 'ETIMEDOUT',
     errno: 'ETIMEDOUT',
     syscall: 'connect',
     address: '147.83.184.65',
     port: 80 },
  error:
   { Error: connect ETIMEDOUT 147.83.184.65:80
       at Object._errnoException (util.js:1022:11)
       at _exceptionWithHostPort (util.js:1044:20)
       at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1198:14)
     code: 'ETIMEDOUT',
     errno: 'ETIMEDOUT',
     syscall: 'connect',
     address: '147.83.184.65',
     port: 80 },
  options:
   { uri: 'http://geoserver.hydsdev.net/geoserver/mhews/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetFeatureInfo&FORMAT=image%2Fjpeg&TRANSPARENT=true&INFO_FORMAT=text%2Fxml&FEATURE_COUNT=50&X=50&Y=50&SRS=EPSG%3A4326&WIDTH=101&HEIGHT=101&QUERY_LAYERS=mhews:ffews_rain_accumulation_15min_opera&LAYERS=mhews:ffews_rain_accumulation_15min_opera&BBOX=0.6319608%2C42.770155%2C0.8319608%2C42.870155000000004&TIME=2018-09-28T17:30:00.000Z',
     pool:
      Agent {
        domain: null,
        _events: [Object],
        _eventsCount: 1,
        _maxListeners: undefined,
        defaultPort: 80,
        protocol: 'http:',
        options: [Object],
        requests: {},
        sockets: {},
        freeSockets: {},
        keepAliveMsecs: 1000,
        keepAlive: false,
        maxSockets: 15,
        maxFreeSockets: 256,
        'http:': [Object] },
     callback: [Function: RP$callback],
     transform: undefined,
     simple: true,
     resolveWithFullResponse: false,
     transform2xxOnly: false },
  response: undefined }
Cheers.
Finally I fixed the problem. I don't know why, but in the 'request-promise' options object I have to put headers: { 'User-Agent': 'request' }. Thank you very much @Rajesh!
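In other words, the working request options looked roughly like this (a sketch; everything except the headers field is unchanged from the question's code):

// Same request as in the question, plus a User-Agent header; some servers
// reject requests that arrive without one.
return Rp({
  uri: url.url,
  pool: httpAgent,
  headers: { 'User-Agent': 'request' }
}).then(function(result) {
  url.result = result;
  delete url.url;
  return url;
});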