What I'm trying to do in my tests is to simulate ETIMEDOUT so that it is caught by socket.on('error', () => {...}). In the real world, with the 3rd-party TCP server that I'm using, ETIMEDOUT is always caught by the error event. I would like to mimic this situation in my tests as well. Going through the tls docs, the only candidate I could find for this purpose is socket.setTimeout, but it does not work how I would expect it to:
describe('TCP timeout', () => {
  const TIMEOUT_AFTER_IN_MILLISECONDS = 1
  const socket = getActiveSocketFromSomewhere()

  it('should simulate timeout', () => {
    socket.setTimeout(TIMEOUT_AFTER_IN_MILLISECONDS, () => {
      console.log('are we here')
    })
    /**
     * This will trigger socket communication
     * with dummy TCP server where socket from
     * above will be used
     */
    return something()
    ...
  })
})
From the console I can see that I waited for an answer for 14 ms and that the callback provided to setTimeout was executed, but afterwards I can see that I still received a response from the TCP server:
are we here
{ result: 'success', ... }
Yes, that's how the behavior is.
When the timeout is reached, the socket is not ended explicitly. This is clearly mentioned in the documentation, and you'll receive a response whenever it arrives:
https://nodejs.org/dist/latest-v6.x/docs/api/net.html#net_socket_settimeout_timeout_callback
If you want to end the socket, you need to manually call socket.end() or socket.destroy() after the timeout event is triggered.
Code:
socket.on('timeout', function () {
  socket.end();
})
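Since you specifically want the error event to fire with ETIMEDOUT (as it does with your real server), note that socket.destroy(err) emits 'error' with whatever error you pass in. A minimal sketch of faking the error from the timeout handler (the synthetic error object is an assumption of the sketch, not something Node produces for you here):

socket.setTimeout(TIMEOUT_AFTER_IN_MILLISECONDS)
socket.on('timeout', () => {
  // build a synthetic ETIMEDOUT and destroy the socket with it,
  // which makes socket.on('error', ...) fire just like in production
  const err = new Error('connect ETIMEDOUT')
  err.code = 'ETIMEDOUT'
  socket.destroy(err)
})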
Related
I have a Node.js service that consumes messages from Kafka and processes them through various steps of transformation logic. During processing, the service uses Redis and Mongo for storage and caching purposes. In the end, it sends the transformed message to another destination via UDP packets.
On startup, it starts consuming messages from Kafka; after a while, it crashes with the unhandled error ERR_CANNOT_SEND unable to send data (see below).
Restarting the application resolves the issue temporarily.
I initially thought it might have to do with the forwarding through UDP sockets, but the forwarding destinations are reachable from the consumer!
I'd appreciate any help here; I'm kinda stuck.
Consumer code:
const readFromKafka = ({ host, topic, source }, transformationService) => {
  const logger = createChildLogger(`kafka-consumer-${topic}`);
  const options = {
    // connect directly to kafka broker (instantiates a KafkaClient)
    kafkaHost: host,
    groupId: `${topic}-group`,
    protocol: ['roundrobin'], // and so on the other kafka config.
  };
  logger.info(`starting kafka consumer on ${host} for ${topic}`);
  const consumer = new ConsumerGroup(options, [topic]);
  consumer.on('error', (err) => logger.error(err));
  consumer.on('message', async ({ value, offset }) => {
    logger.info(`received ${topic}`, value);
    if (value) {
      const final = await transformationService([
        JSON.parse(Buffer.from(value, 'binary').toString()),
      ]);
      logger.info('Message received', { instanceID: final[0].instanceId, trace: final[1] });
    } else {
      logger.error(`invalid message: ${topic} ${value}`);
    }
    return;
  });
  consumer.on('rebalanced', () => {
    logger.info('consumer is rebalancing');
  });
  return consumer;
};
Consumer Service startup and error handling code:
//init is the async function used to initialise the cache and other config and components.
const init = async () => {
  //initialize cache, configs.
}

//startConsumer is the async function that connects to Kafka,
//and adds a callback for the onMessage listener which processes the message through the transformation service.
const startConsumer = async ({ ...config }) => {
  //calls to fetch info like topic, transformationService etc.
  //readFromKafka function defn pasted above
  readFromKafka({ topicConfig }, transformationService);
};

init()
  .then(startConsumer)
  .catch((err) => {
    logger.error(err);
  });
Forwarding code through UDP sockets:
The following code throws the unhandled error intermittently; it seemed to work for the first few thousand messages, and then suddenly crashes.
const udpSender = (msg, destinations) => {
  return Object.values(destinations)
    .map(({ id, host, port }) => {
      return new Promise((resolve) => {
        dgram.createSocket('udp4').send(msg, 0, msg.length, port, host, (err) => {
          resolve({
            id,
            timestamp: Date.now(),
            logs: err || 'Sent successfully',
          });
        });
      });
    });
};
Based on our comment exchange, I believe the issue is just that you're running out of resources.
Throughout the lifetime of your app, every time you send a message you open up a brand new socket. However, you're not doing any cleanup after sending that message, and so that socket stays open indefinitely. Your open sockets then continue to pile up, consuming resources, until you eventually run out of... something. Perhaps memory, perhaps ports, perhaps something else, but ultimately your app crashes.
Luckily, the solution isn't too convoluted: just reuse existing sockets. In fact, you could reuse a single socket for the entire application if you wanted, as socket.send handles queueing for you internally, so there's no need for any smart hand-offs. However, if you want a little more concurrency, here's a quick implementation of a round-robin queue, where we create a pool of 10 sockets in advance and grab one whenever we want to send a message:
const MAX_CONCURRENT_SOCKETS = 10;
var rrIndex = 0;

const rrSocketPool = (() => {
  var arr = [];
  for (let i = 0; i < MAX_CONCURRENT_SOCKETS; i++) {
    let sock = dgram.createSocket('udp4');
    arr.push(sock);
  }
  return arr;
})();

const udpSender = (msg, destinations) => {
  return Object.values(destinations)
    .map(({ id, host, port }) => {
      return new Promise((resolve) => {
        var sock = rrSocketPool[rrIndex];
        rrIndex = (rrIndex + 1) % MAX_CONCURRENT_SOCKETS;
        sock.send(msg, 0, msg.length, port, host, (err) => {
          resolve({
            id,
            timestamp: Date.now(),
            logs: err || 'Sent successfully',
          });
        });
      });
    });
};
Be aware that this implementation is still naïve, mostly because there's still no error handling on the sockets themselves, only on their .send method. You should look at the docs for more info about catching events such as error events, especially if this is a production server that's supposed to run indefinitely.

The error handling you've put inside your .send callback will only fire if an error occurs in a call to .send. If, between sends, while your sockets are idle, some system-level error outside of your control breaks a socket, that socket may emit an error event, which will go unhandled (which is what's happening in your current implementation, with the intermittent errors you see prior to the fatal one). At that point the socket may be permanently unusable, meaning it should be replaced or otherwise dealt with (or alternatively, just force the app to restart and call it a day, like I do :-) ).
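For illustration, here's a minimal sketch of what per-socket error handling could look like, replacing a socket that breaks while idle (the makeSocket helper and the logging are assumptions of the sketch, not part of the code above):

const rrSocketPool = [];

const makeSocket = (i) => {
  const sock = dgram.createSocket('udp4');
  sock.on('error', (err) => {
    // the socket broke outside of a .send call; log it,
    // discard it, and swap a fresh socket into the pool
    console.error(`socket ${i} errored, replacing it`, err);
    try { sock.close(); } catch (e) { /* may already be closed */ }
    rrSocketPool[i] = makeSocket(i);
  });
  return sock;
};

for (let i = 0; i < MAX_CONCURRENT_SOCKETS; i++) {
  rrSocketPool[i] = makeSocket(i);
}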
The app crashes if I do not call client.close in the program below.
Sending the message and receiving data works fine. But if I exit the function and come back to it again later, the app crashes and I cannot receive messages anymore. If I restart the smartphone it works again, but only for the first time the function runs.
If I put client.close() inside the client.on('message', ...) handler, I only get the first data from the host or source, because the socket closes prematurely. Also, the app does not crash.
If I remove the client.close(), I get all the data from multiple sources, saved in the array I provided (let RawMessageUDP = []).
I also confirmed that the callback of client.on('message', ...) is not executed when there are no more messages on the socket.
How can I determine that there are no more messages on the socket, so I can close it?
There are two hosts that receive the message and reply a data string back to this app. I confirmed there are no issues on the hosts, since they close the connection after sending.
Send_UDP_Multicast = async () => {
  const message = Buffer.from('Some bytes');
  const client = dgram.createSocket('udp4');
  let RawMessageUDP = []
  let countMessage = 0

  client.on('error', (err) => {
    console.log(err.stack)
    client.close()
  })

  client.on('message', (data, rinfo) => { //Console: socket-x, bound to address: 0.0.0.0, port: 65000 max
    RawMessageUDP[countMessage] = data.toString()
    console.log('Receiving remote data.' + RawMessageUDP[countMessage])
    countMessage++
    //client.close()
  })

  client.send(message, 0, message.length, 1900, '239.255.255.250', (err) => {
    if (err) {
      console.log(err);
      client.close();
    }
  })
}
https://github.com/jurniores/SocketUDP may resolve your problem. In this lib you will find how to disconnect clients, and if clients drop, it will disconnect them so they do not overload your server.
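More directly to the question of knowing when there are no more messages: UDP itself has no end-of-stream signal, so a common workaround is an inactivity timer that is reset on every datagram and closes the socket when it fires. A minimal sketch against the code above (the IDLE_MS value is an arbitrary assumption; tune it for your hosts):

const IDLE_MS = 2000; // assumed idle window
let idleTimer = setTimeout(() => client.close(), IDLE_MS);

client.on('message', (data, rinfo) => {
  RawMessageUDP[countMessage] = data.toString()
  countMessage++
  // a datagram arrived, so push the close deadline back
  clearTimeout(idleTimer)
  idleTimer = setTimeout(() => client.close(), IDLE_MS)
})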
I'm getting a huge number of HTTP requests (6k, with lagging) within 1-3 minutes in the console when I receive or send data over a Socket.IO connection.
I'm using Node + Express on the backend and Vue on the front end.
Backend:
app.js
mongoose.connect('mongodb://localhost/app', { useNewUrlParser: true, useFindAndModify: false })
  .then(result => {
    const server = app.listen(3000)
    const io = require('./sockets/socket').init(server)
    io.on('connection', socket => {
      // console.log('client connected')
    })
    if (result) {
      console.log('express & mongo running')
    }
  })
  .catch(error => console.log(error))
I created an io instance to use in the routes:
let io

module.exports = {
  init: httpServer => {
    io = require('socket.io')(httpServer)
    return io;
  },
  getIo: () => {
    if (!io) {
      throw new Error('socket io not initialized')
    }
    return io;
  }
}
Then, in the route, depending on the logic, an if/else chooses what type of socket response to emit:
router.post('/post/voteup', checkAuthentication, async (req, res) => {
  //some logic
  if (a.length <= 0) {
    io.getIo().emit('xxx', { action: 'cleanAll' })
  }
  else if (b.length <= 0) {
    io.getIo().emit('xxx', { action: 'cleanT', datoOne })
  }
  else {
    io.getIo().emit('xxx', { action: 'cleanX', dataTwo, dataOne, selected })
  }
  res.json({ serverResponse: 'success' })
})
On the front end (component), activated with the beforeUpdate lifecycle hook:
getData() {
  let socket = openSocket('http://localhost:3000')
  socket.on('xxx', data => {
    if (data.action === 'cleanX') {
      if (this.selected === data.selected) {
        this.ddd = data.dataTwo
      }
      else if (!this.userTeamNickname) {
        this.qqq = data.dataOne
      }
    }
    else if (data.action === 'cleanAll') {
      this.ddd = []
      this.qqq = []
    }
    else if (data.action === 'cleanT') {
      this.ddd = data.dataOne
    }
  })
},
1. What kind of behavior can produce such an error?
2. Is there a more efficient way to do this?
It looks like socket.io is failing to establish a webSocket connection and never advances out of polling. By default, a socket.io connection starts with HTTP polling and, after a bit of negotiation with the server, attempts to establish a webSocket connection. If that succeeds, it stops polling and uses only the webSocket connection. If the webSocket connection fails, it just keeps polling.
Here are some reasons that can happen:
You have a mismatched version of socket.io in client and server.
You have some piece of infrastructure (proxy, firewall, load balancer, etc...) in between client and server that is not letting webSocket connections through.
You've attached more than one socket.io server handler to the same web server. You can't do that as the communication will get really messed up as multiple server handlers try to respond to the same client.
As a test, you could force the client to connect only with webSocket (no polling at all to start) and see if the connection fails:
let socket = io(yourURL, { transports: ["websocket"] });
socket.on('connect', () => { console.log("connected"); });
socket.on('connect_error', (e) => { console.log("connect error: ", e); });
socket.on('connect_timeout', (e) => { console.log("connect timeout: ", e); });
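On the server side you can also check which transport each connection actually ends up on. A small sketch (socket.conn is the underlying engine.io connection that socket.io exposes):

io.on('connection', (socket) => {
  // prints 'polling' or 'websocket'
  console.log('transport:', socket.conn.transport.name);
  socket.conn.on('upgrade', () => {
    console.log('upgraded to:', socket.conn.transport.name);
  });
});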
I am trying to 'gracefully' close a net.Server instance (created with app.listen()) if an unhandled error is thrown. Server creation occurs in my bin/www script. All error handling and routing middleware configuration is defined in index.js.
In my application configuration module (index.js) I have error-handling middleware that checks whether each error is handled. If the error is not handled, then a 'close_server' event is emitted.
Note: Each req and res is wrapped in a domain. I am using the express-domain-middleware module to listen for error events on each req domain and route the error to my error handling. I only mention this in case it might be the culprit.
The 'close_server' event handler should:
Close the server so new connections are not accepted.
Close the current process once all open connections have completed.
If after 10 seconds the server has not closed, force the process to close.
The optional callback provided to server.close() never seems to be invoked and I'm not sure why. To test this I am making a single request which throws an error. The process is only closed after the timer expires (10 seconds has elapsed).
Could there be something holding open a connection in the server? Why is the server.close() callback never called?
Thanks!
Update
I was using Chrome to make a request to the server. It appears that the browser is holding open a connection. If I make the request using curl it works as expected.
See this issue
index.js
app.use(function (err, req, res, next) {
  if (typeof err.statusCode !== 'undefined') {
    if (err.statusCode >= 500) {
      Logger.error({ error: err });
      return next(err);
    } else {
      Logger.warn({ warn: err });
      return next(err);
    }
  } else {
    //The error is un-handled and the server needs to go bye, bye
    var unhandledError = new UnhandledError(util.format('%s:%s', req.method, req.originalUrl), 'Server shutting down!', err.stack, 500);
    Logger.fatal({ fatal: unhandledError });
    res.status(500).send('Server Error');
    app.emit('close_server', unhandledError);
  }
});
bin/www
#!/usr/bin/env node
var app = require('../index');

var port = config.applicationPort;
app.set('port', port);

var server = app.listen(app.get('port'));

/*
 * Wait for open connections to complete and shut server down.
 * After 10 seconds force process to close.
 * */
app.on('close_server', function () {
  server.close(function () {
    console.log('Server Closed.');
    process.exit()
  });

  setTimeout(function () {
    console.log('Force Close.');
    process.exit()
  }, 10 * 1000);
});
server.close() does not close open client connections, it only stops accepting new connections.
So most likely Chrome is sending a request with Connection: keep-alive which means the connection stays open for some time for efficiency reasons (to be able to make multiple requests on the same connection), whereas curl is probably using Connection: close where the connection is severed immediately after the server's response.
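A common workaround (a sketch against the asker's bin/www setup, not something the server does for you) is to track the open sockets yourself and destroy them on shutdown, so keep-alive connections can't hold close() open:

const sockets = new Set();

server.on('connection', (socket) => {
  sockets.add(socket);
  socket.on('close', () => sockets.delete(socket));
});

app.on('close_server', function () {
  server.close(function () {
    console.log('Server Closed.');
    process.exit();
  });
  // sever any lingering keep-alive connections so close() can finish
  for (const socket of sockets) socket.destroy();
});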
As @mscdex mentioned, server.close never runs its callback when the browser sends requests with Connection: keep-alive, because server.close only stops the server from accepting new connections.
Node v18.2.0 introduced server.closeAllConnections() and server.closeIdleConnections().
server.closeAllConnections() closes all connections connected to the server, while server.closeIdleConnections() closes only the connections that are not sending a request or waiting for a response.
Before Node v18.2.0, I tackled this problem by waiting 5 seconds for the server to shut down, after which it would force exit.
The following code handles both situations:
process.on('SIGINT', gracefulShutdown)
process.on('SIGTERM', gracefulShutdown)

function gracefulShutdown (signal) {
  if (signal) {
    console.log(`\nReceived signal ${signal}`)
  }
  console.log('Gracefully closing http server')

  // closeAllConnections() is only available from Node v18.2.0 onwards
  if (server.closeAllConnections) server.closeAllConnections()
  else setTimeout(() => process.exit(0), 5000)

  try {
    server.close(function (err) {
      if (err) {
        console.error(err)
        process.exit(1)
      } else {
        console.log('http server closed successfully. Exiting!')
        process.exit(0)
      }
    })
  } catch (err) {
    console.error('There was an error', err)
    setTimeout(() => process.exit(1), 500)
  }
}
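A possible refinement (my assumption, not part of the code above): if you want in-flight requests to finish first, close only the idle connections immediately and force-close the rest after a grace period:

if (server.closeIdleConnections) server.closeIdleConnections()

// after a grace period, force-close whatever is still open;
// unref() so this timer alone doesn't keep the process alive
setTimeout(() => {
  if (server.closeAllConnections) server.closeAllConnections()
}, 5000).unref()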
I'm using node.js http.createServer to listen for POST requests. If a request completes quickly, all works well. If a request takes more than 5 seconds to complete, no response is returned to the client.
I added event listeners on the created sockets:
server.on('connection', function (socket) {
  log.info('SOCKET OPENED' + JSON.stringify(socket.address()));
  socket.on('end', function () {
    log.info('SOCKET END: other end of the socket sends a FIN packet');
  });
  socket.on('timeout', function () {
    log.info('SOCKET TIMEOUT');
  });
  socket.on('error', function (error) {
    log.info('SOCKET ERROR: ' + JSON.stringify(error));
  });
  socket.on('close', function (had_error) {
    log.info('SOCKET CLOSED. IT WAS ERROR: ' + had_error);
  });
});
I got these messages in the middle of a request (about 10 seconds after the start):
info: SOCKET TIMEOUT
info: SOCKET CLOSED. IT WAS ERROR: false
But on the client side the socket does not get closed, so the client keeps waiting for a response. The request eventually completes successfully and the response is sent (on the closed socket!), but the client is still waiting.
I have no idea how to prevent these timeouts. I removed all timeouts from my code, and tried to add keep-alive, with no result:
socket.setKeepAlive(true);
How do I prevent the socket's built-in timeout?
According to Node's setTimeout documentation, the connection will not be severed when a timeout occurs. That's why the browser is still waiting. You're still required to end() or destroy() the socket yourself. You can increase the timeout by calling setTimeout on the socket:
socket.setTimeout(1000 * 60 * 300); // 5 hours
You can do a couple of things:
socket.setTimeout(/* number of milliseconds */);
If you do this, then the server can get a timeout event:
server.on('timeout', function (timedOutSocket) {
  timedOutSocket.write('socket timed out!');
  timedOutSocket.end();
});
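If you'd rather change the timeout for every socket at once, http.Server also exposes server.setTimeout (the values below are just examples):

// disable the built-in idle timeout entirely...
server.setTimeout(0);

// ...or raise it to 10 minutes instead
server.setTimeout(10 * 60 * 1000);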