Does the aws-sdk for Node.js manage its connections through an internal pool?
Their documentation kind of leads me to believe that:
httpOptions (map) — A set of options to pass to the low-level HTTP request. Currently supported options are:
proxy [String] — the URL to proxy requests through
agent [http.Agent, https.Agent] — the Agent object to perform HTTP requests with. Used for connection pooling. Defaults to the global agent (http.globalAgent) for non-SSL connections. Note that for SSL connections, a special Agent object is used in order to enable peer certificate verification. This feature is only available in the Node.js environment.
But there's no way, at least none that I could find, that'd let me define any connection pool properties.
What are my options if I want to control the concurrent connections in use?
Is it better to let the SDK handle that?
You can pass an http.Agent with whatever settings you want for max sockets:
var AWS = require('aws-sdk');
var http = require('http');

AWS.config.update({
  httpOptions: {
    // any http.Agent options work here; maxSockets 25 is just an example limit
    agent: new http.Agent({ maxSockets: 25 })
  }
});
I have been looking into this a little bit more and dug around to figure out the defaults being used.
The AWS SDK uses Node's http module, whose default maxSockets is Infinity. For SSL connections it uses the https module under the hood with a maxSockets limit of 50.
The relevant code snippet:
sslAgent: function sslAgent() {
  var https = require('https');
  if (!AWS.NodeHttpClient.sslAgent) {
    AWS.NodeHttpClient.sslAgent = new https.Agent({rejectUnauthorized: true});
    AWS.NodeHttpClient.sslAgent.setMaxListeners(0);
    // delegate maxSockets to globalAgent, set a default limit of 50 if current value is Infinity.
    // Users can bypass this default by supplying their own Agent as part of SDK configuration.
    Object.defineProperty(AWS.NodeHttpClient.sslAgent, 'maxSockets', {
      enumerable: true,
      get: function() {
        var defaultMaxSockets = 50;
        var globalAgent = https.globalAgent;
        if (globalAgent && globalAgent.maxSockets !== Infinity && typeof globalAgent.maxSockets === 'number') {
          return globalAgent.maxSockets;
        }
        return defaultMaxSockets;
      }
    });
  }
  return AWS.NodeHttpClient.sslAgent;
}
For manipulating the socket counts, see BretzL's answer.
There is, however, no way to set the agent for both http and https at once. You can work around this by updating the configuration as you switch from http to https and vice versa.
See : https://github.com/aws/aws-sdk-js/issues/1185
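A minimal sketch of that workaround, assuming you know before each call which protocol the endpoint uses (the maxSockets value of 25 is only an example):

var AWS = require('aws-sdk');
var http = require('http');
var https = require('https');

var httpAgent = new http.Agent({ maxSockets: 25 });
var httpsAgent = new https.Agent({ maxSockets: 25 });

// Update the SDK configuration before switching between plain-HTTP and SSL
// endpoints so the matching agent (and its pool limit) is used.
function useAgentFor(protocol) {
  AWS.config.update({
    httpOptions: { agent: protocol === 'https' ? httpsAgent : httpAgent }
  });
}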
The best way to solve this would be to update the SSL endpoint I'm trying to connect to, but I don't have the ability to.
I'm trying to reach a SOAP endpoint (it's painful) for an application that is barely being maintained and thus probably won't get the proper SSL patch.
It's sitting behind a proxy that is doing active SSL rewrites, which could also be to blame for the error:
var request = require("request")
var soap = require("soap")
const fs = require('fs')
var specialRequest = request.defaults({
ca: fs.readFileSync("rewrite-example.pem")
})
var options = { request: specialRequest }
const WSDL = "https://SSL-rewrite.example?wsdl"
soap.createClient(WSDL, options, function(err, client) {
if(err) throw Error(err)
})
Error:
Uncaught TypeError: req.then is not a function
at HttpClient.request (../node_modules/soap/lib/http.js:191:13)
at Object.open_wsdl (../node_modules/soap/lib/wsdl/index.js:1271:20)
at openWsdl (../node_modules/soap/lib/soap.js:70:16)
at ../node_modules/soap/lib/soap.js:48:13
at _requestWSDL (../node_modules/soap/lib/soap.js:76:9)
at Object.createClient (../node_modules/soap/lib/soap.js:94:5)
> Uncaught: Error: write EPROTO C017726B8C7F0000:error:0A000152:SSL routines:final_renegotiate:unsafe legacy renegotiation disabled:../deps/openssl/openssl/ssl/statem/extensions.c:908
From what I found here, it's possible to create a custom OpenSSL config file allowing unsafe legacy renegotiation, and using Node's --openssl-config flag it should be possible to "ignore" the renegotiation. I've tried writing a custom config file as described in the first link and passing it in, but to no avail.
This question has been asked before, though reverting to an older version of Node would not be ideal.
What might be some other ways to resolve this?
As you have already found, this error comes from CVE-2009-3555. It is an issue on the server (IIS) side, so it won't simply be ignored using Node flags; since Node 17 or 18, the OpenSSL option to accept legacy servers is no longer enabled by default.
I think a better solution in your case is passing an httpsAgent with that option.
soap.js uses Axios as of v0.40.0 according to the readme, so you should set the request param like this:
const crypto = require('crypto')
const https = require('https')
const axios = require('axios')

const options = {
  request: axios.create({
    // axios options
    httpsAgent: new https.Agent({
      // for self signed you could also add
      // rejectUnauthorized: false,
      // allow legacy server
      secureOptions: crypto.constants.SSL_OP_LEGACY_SERVER_CONNECT,
    }),
  }),
}
https.Agent's secureOptions is a numeric bitmask of the SSL_OP_* options.
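Because it is a bitmask, several SSL_OP_* flags can be combined with a bitwise OR. A small illustrative sketch (the second flag is just an arbitrary example, not something the SOAP case above needs):

const legacyAgent = new https.Agent({
  secureOptions:
    crypto.constants.SSL_OP_LEGACY_SERVER_CONNECT | // allow legacy renegotiation
    crypto.constants.SSL_OP_NO_TLSv1,               // and, for example, refuse TLS 1.0
})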
In order to properly build the URLs in my XML sitemaps and RSS feeds, I want to determine whether the page is currently served over http or https, in a way that also works locally in development.
export default function handler(req, res) {
  const host = req.headers.host;
  const proto = req.connection.encrypted ? "https" : "http";
  // construct url for xml sitemaps
}
With the above code, however, it still shows as being served over http even on Vercel, where I would expect https. Is there a better way to figure out http vs https?
Next.js API routes run behind a proxy that terminates TLS and forwards plain HTTP, so the protocol the route itself sees is http.
By changing the code to the following I was able to check which protocol the proxy received:
const proto = req.headers["x-forwarded-proto"];
However, this breaks in development, where you are not running behind a proxy, and in other deployment setups that might also not involve a proxy. To support both use cases I eventually ended up with the following code:
const proto =
  req.headers["x-forwarded-proto"] ||
  (req.connection.encrypted ? "https" : "http");
Whenever the x-forwarded-proto header is not present (undefined), we fall back to req.connection.encrypted to determine whether we are serving over https or http.
Now it works on localhost as well as on a Vercel deployment.
My solution:
export const getServerSideProps: GetServerSideProps = async (context: any) => {
  // Determine the protocol from the incoming request's referer header
  const reqUrl = context.req.headers["referer"];
  const url = new URL(reqUrl);
  console.log(url.protocol); // e.g. "http:"

  // Fetch data from external API
  // const res = await fetch(`${origin}/api/projets`)
  // const data = await res.json()

  // Pass data to the page via props
  return { props: {} }
}
I was looking through the part of my codebase that sets up the server today and found the following lines:
var https = require('https');
https.globalAgent.options.secureProtocol = 'TLSv1_2_method';

function createHttpsServer(app) {
  var https = require('https');
  var fs = require('fs');
  const options = {
    secureProtocol: 'TLSv1_2_method',
    // ...
  };
  var server = https.createServer(options, app);
  return server;
}
It looked like code duplication to me, and I am not sure why these would do different things (or do they?).
A colleague of mine told me that the top one is for controlling TLS in HTTPS requests made from Node.js, since it configures the https agent that is used for all client HTTP requests.
This was also compared to the ServicePointManager in the .NET world.
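To illustrate what I think the colleague means, here is a small sketch (example.com and the certificate paths are just placeholders): the global agent's options apply to outgoing requests made from Node, while the options passed to createServer only apply to connections made to our server.

var https = require('https');
var fs = require('fs');

// Outgoing: requests that use the global agent negotiate TLS with these options.
https.globalAgent.options.secureProtocol = 'TLSv1_2_method';
https.get('https://example.com/', function (res) {
  console.log('client socket negotiated', res.socket.getProtocol()); // 'TLSv1.2'
});

// Incoming: these options only affect clients connecting to this server.
var server = https.createServer({
  secureProtocol: 'TLSv1_2_method',
  key: fs.readFileSync('server-key.pem'),   // placeholder paths
  cert: fs.readFileSync('server-cert.pem')
}, function (req, res) {
  res.end('ok');
});
server.listen(8443);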
So do these methods both do different things? At some point, our code does:
var server = protocol === 'https' ? createHttpsServer(app) : createHttpServer(app);
Wouldn't that be using the same server at the end of the day?
var server = protocol === 'https' ? createHttpsServer(app) : createHttpServer(app);
The above line creates either an HTTPS or an HTTP server: if the protocol is 'https' it will run an HTTPS server (which requires an SSL certificate), whereas if the protocol is 'http' it will run a plain HTTP server.
There are a lot of examples of graceful stop for expressjs; how can I achieve the same for koajs?
I would like to disconnect database connections as well.
I have a mongoose database connection and 2 Oracle DB connections (https://github.com/oracle/node-oracledb).
I created an npm package, http-graceful-shutdown (https://github.com/sebhildebrandt/http-graceful-shutdown), some time ago. It works perfectly with http, express and koa. Since you also want to add your own cleanup, I modified the package so that you can now supply your own cleanup function, which will be called on shutdown. So basically this package handles all the http shutdown work plus calls your cleanup function (if provided in the options):
const koa = require('koa');
const gracefulShutdown = require('http-graceful-shutdown');

const app = new koa();
...
server = app.listen(...); // app can be an express OR koa app
...

// your personal cleanup function - this one takes one second to complete
function cleanup() {
  return new Promise((resolve) => {
    console.log('... in cleanup');
    setTimeout(function() {
      console.log('... cleanup finished');
      resolve();
    }, 1000);
  });
}

// this enables the graceful shutdown with advanced options
gracefulShutdown(server, {
  signals: 'SIGINT SIGTERM',
  timeout: 30000,
  development: false,
  onShutdown: cleanup,
  finally: function() {
    console.log('Server gracefully shut down ...');
  }
});
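For the databases in the question, the cleanup function could look roughly like this (just a sketch: oraclePool1 and oraclePool2 are assumed to be node-oracledb pools created elsewhere in the app):

const mongoose = require('mongoose');

async function cleanup() {
  await mongoose.disconnect();  // close the mongoose connection
  await oraclePool1.close(10);  // node-oracledb pools: drain for up to 10 seconds
  await oraclePool2.close(10);
  console.log('... cleanup finished');
}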
I have answered a variation of "how to terminate an HTTP server" many times on different Node.js support channels. Unfortunately, I couldn't recommend any of the existing libraries because they are all lacking in one way or another. I have since put together a package that (I believe) handles all the cases expected of graceful HTTP server termination.
https://github.com/gajus/http-terminator
The main benefits of http-terminator are that:
it does not monkey-patch the Node.js API
it immediately destroys all sockets without an attached HTTP request
it allows a graceful timeout for sockets with ongoing HTTP requests
it properly handles HTTPS connections
it informs connections using keep-alive that the server is shutting down by setting a connection: close header
it does not terminate the Node.js process
Usage with Koa:
import Koa from 'koa';
import { createHttpTerminator } from 'http-terminator';

const app = new Koa();
const server = app.listen();

const httpTerminator = createHttpTerminator({
  server,
});

await httpTerminator.terminate();
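In practice you would usually call terminate() from a signal handler rather than right away; a minimal, illustrative sketch (whether you exit the process afterwards is up to you, since http-terminator deliberately never does):

process.on('SIGTERM', async () => {
  await httpTerminator.terminate(); // stop accepting new requests, let in-flight ones finish
  process.exit(0);
});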
To make sure the Oracle DB connections are closed nicely, you can use a connection pool and call pool.close() with a drainTime of 0 or greater. This will let the app relatively cleanly interrupt any operation that is currently using a connection. It allows freeing the DB end of the connections without the DB waiting for whatever timeout period to expire before it cleans itself up. Even with two connections this is a solution I'd look at, since it doesn't matter that the pool is small. You may need to set the Oracle Net out-of-band break detection as well, see Connections and High Availability.
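A rough sketch of that approach with node-oracledb (the connection settings and the 10-second drain time are placeholders):

const oracledb = require('oracledb');

let pool;

async function initDb() {
  pool = await oracledb.createPool({
    user: 'app_user',                         // placeholder credentials
    password: process.env.DB_PASSWORD,
    connectString: 'dbhost.example.com/XEPDB1',
    poolMin: 0,
    poolMax: 2                                // a small pool is fine here
  });
}

async function closeDb() {
  // A drainTime of 0 interrupts in-flight work immediately; a few seconds
  // lets current operations finish before the connections are freed.
  await pool.close(10);
}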
Modern versions of Node have support for AbortController, so there is no need for external libraries. A simple example:
const http = require('http');
const Koa = require('koa');

const app = new Koa();
const server = http.createServer(app.callback());
const controller = new AbortController();

server.listen({
  host: 'localhost',
  port: 80,
  signal: controller.signal
});

// middleware... etc.
app.use(async (ctx) => {
  ctx.body = 'Hello World';
});

// Later, when you want to close the server.
controller.abort();
Amazon S3 allows static website hosting, but with a requirement that the bucket name must match your domain name. This means your bucket name will look like: mydomain.com. Amazon S3 also provides a wildcard SSL certificate for *.s3.amazonaws.com. By the rules of TLS, this means com.s3.amazonaws.com IS covered by the certificate, but mybucket.com.s3.amazonaws.com is not. Node applications like Knox that connect to *.com.s3.amazonaws.com should really be able to trust that certificate, even though it breaks the rules of TLS, since the knox library is a 'closed system': it only ever connects to an Amazon property.
The Node module https relies on tls.js, and tls.js has this function:
function checkServerIdentity(host, cert) {
  ...
  // "The client SHOULD NOT attempt to match a presented identifier in
  // which the wildcard character comprises a label other than the
  // left-most label (e.g., do not match bar.*.example.net)."
  // RFC6125
  if (!wildcards && /\*/.test(host) || /[\.\*].*\*/.test(host) ||
      /\*/.test(host) && !/\*.*\..+\..+/.test(host)) {
    return /$./;
  }
This will properly return a "Certificate Mismatch" error. Can the upper-level Knox module override the checkServerIdentity function, which is several levels down and not called directly by Knox? I know how to override a function in a library I require, but not in libraries that are included by those libraries.
There is a global cache for modules, which means that any function you override will be modified for all other modules that require it. I think you can require tls yourself and patch checkServerIdentity:
// main.js
var tls = require('tls'),
    mod = require('./mod.js');

tls.checkServerIdentity = function (host, cert) {
  return true;
};

mod.test();

// mod.js
var tls = require('tls');

exports.test = function () {
  console.log(tls.checkServerIdentity()); // true
};
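If you would rather keep verification for everything except the S3 hosts from the question, a narrower variant of the same patch could look like this (a sketch; the host check and the fall-through to the original function are my own additions):

var tls = require('tls');
var originalCheckServerIdentity = tls.checkServerIdentity;

tls.checkServerIdentity = function (host, cert) {
  if (/\.s3\.amazonaws\.com$/.test(host)) {
    return true; // trust Amazon's wildcard certificate for bucket hosts
  }
  return originalCheckServerIdentity(host, cert); // everything else is verified as usual
};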
If you don't want to make changes to the global module objects (per your comment on Nik's answer), maybe you could use the rewire module. I imagine doing it something like this:
var rewire = require("rewire");

var knoxModule = rewire("./node_modules/knox/somefile.js");

knoxModule.__set__("tls", {
  checkServerIdentity: function (host, cert) {
    // some code
  }
});
I haven't ever worked with it though.