I am using socket.io for realtime functionality in my app with hapijs. When I try to add a listener on the server side in a hapijs route and then reload the same route/page 10 times or more, it starts showing me this error: (node:9004) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 board_5a40a863a7fbf12cf8a7f1b8_element_sorted listeners added. Use emitter.setMaxListeners() to increase limit. You can also see the error in the attached screenshot.
I tried each of the following to remove the listeners first and then add them back using socket.on('eventname', callback):
io.sockets.removeListener("eventname", callback)
io.removeListener("eventname", callback)
socket.removeListener("eventname", callback)
io.sockets.removeAllListeners()
io.removeAllListeners()
socket.removeAllListeners()
But every time I got an error saying that removeAllListeners/removeListener is not a function.
I also tried setting the max listeners limit to unlimited using each of the following:
io.setMaxListeners(0);
io.sockets.setMaxListeners(0);
socket.setMaxListeners(0);
But I still kept getting the same memory-leak warning. So can somebody tell me the solution for this? I would prefer the approach of removing the event listeners first and then adding them back, but I don't know which function I need to call. :(
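For reference, the Node EventEmitter methods are spelled removeListener / removeAllListeners, and removeListener only works if you pass the same function reference that was registered. A minimal sketch, with an illustrative event name and handler:

// Keep a named reference; an anonymous inline callback can never be
// matched by removeListener later.
function onElementSorted(data) {
  // handle the event
}

// Clear any handler left over from a previous page load, then re-add it.
socket.removeAllListeners('board_element_sorted');
// (or, to remove just this one handler:)
// socket.removeListener('board_element_sorted', onElementSorted);
socket.on('board_element_sorted', onElementSorted);

That said, the deeper fix is usually to register listeners once per connection (inside io.on('connection', ...)) rather than inside a hapijs route handler that runs again on every page load.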
Also, I want to know one more thing: is it a good approach to create a new, unique event listener for every user rather than one common event listener for all users?
For example, suppose I have a chat app with 1 million users.
In the first approach I would have to create 1 million event listeners for 1 million users, so whenever there is a new message from a user, only the users who are chatting with that user get a ping from the server.
In the second approach I would create 1 common event listener for all users, but then the server has to ping all 1 million users, and on the client side I have to parse every message received and check whether it is for me or for somebody else.
In my opinion the second approach is not good because of security issues: there is a chance of a message being received by the wrong/unauthorized user.
But still I am not sure which one to follow.
So, can anyone guide me on this? Any help is appreciated.
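For what it's worth, socket.io rooms are a common middle ground between those two extremes: everyone shares one event name, but the server only emits to the sockets that joined the relevant room, so no client ever sees another user's messages. A minimal sketch, assuming the standard socket.io room API (the user-id lookup is illustrative):

io.on('connection', (socket) => {
  // Put each socket in a private room named after its user id.
  socket.join('user_' + socket.handshake.query.userId);
});

// Deliver a chat message only to the recipient's room.
function sendChatMessage(recipientId, message) {
  io.to('user_' + recipientId).emit('chat_message', message);
}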
Related
I'm having issues with Node.js and the "ws" implementation of WebSocket (https://www.npmjs.com/package/ws). After a surge (plenty of messages in a short window of time), I have data that suggests I've "missed" a message.
I've contacted the owner of the emitter server and he assures me that all messages have been sent on his side.
I've logged every message received on my side (at the start of the on('message', () => {}) handler), and I can't find the missing message, so my assumption is that it never even reached that point.
So I'm wondering:
Messages are received and treated in FIFO order. During the treatment of the current message, new ones are stacked in the Node event loop to be processed immediately after. Correct? Is there a way for that event loop to get "too big" such that it drops new incoming messages? If so, does it drop them quietly, or does the program crash vigorously (in other words, how can I see if a message has been dropped this way)?
Does the 'ws' module have any known limitations on the maximum number of messages received? Does it have an internal way of dropping messages?
Is there a better alternative than the 'ws' module ?
Are there any other ways to explain a "missed" message?
Thanks a lot for your insights,
I use ws in nodejs to handle large message flows from many clients simultaneously in production, and I have never had it lose messages. Each server handles several thousand messages each second from hundreds of different client connections. The way my system works, if ws dropped messages or changed their order, my users would complain loudly.
That makes me guess you are not hitting any limitation of ws.
Early in my programming work I had the not-so-bright idea of putting incoming messages into queue objects in my Node.js code and processing them "later." That led to a hideously confusing message flow through my server; it sometimes looked like I had lost ws messages. I was happy to delete all that code and dispatch every message completely within its message event handler.
Websocket connections sometimes close abnormally. Because network. You can catch those situations with error and close event handlers. It can take a while for the sender of a message, or the receiver, to detect that a network fault of some kind disrupted its connection. That can lead to disagreement about message count between sender and receiver. It's worth investigating.
I adorn ws's connection objects with message counts ("adorn" -- add an application-specific property to an object) and put those message counts into the log when a connection closes.
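Put together, that pattern looks roughly like this; a sketch against the standard ws API, where handleMessage stands in for your own processing:

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  ws.messageCount = 0; // "adorn" the connection object

  ws.on('message', (data) => {
    ws.messageCount += 1;
    handleMessage(data); // do all the work here, no intermediate queue
  });

  ws.on('error', (err) => console.error('ws error:', err));
  ws.on('close', () => {
    console.log('connection closed after', ws.messageCount, 'messages');
  });
});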
I'm developing an API for sending SMS via an HTTP request, using Node.js and mongoose. I have a problem similar to the classic one in multi-threaded applications.
When a user sends an SMS, I check in the database (using mongoose) how many SMS he has already sent, and if that number doesn't exceed a limit, his SMS is sent and his sent count is incremented in the database (the schema stores counts for the hour, day, week, and month). The catch is that I use callbacks for the read-value and increment-value steps and many other operations in my code.
So the problem (I think) is that when a user sends requests very quickly, different callbacks on the server read the same SMS count, authorize the user to send, then increment and save the same value, so the SMS count ends up wrong.
In a multi-threaded application accessing a shared variable, the solution would be to prevent other threads from reading the variable before the current thread has finished all of its work.
With the Node.js event system and data access in MongoDB, I just don't know how to solve my problem.
Thank you in advance for the answers.
PS: I don't know the solution, but it would be good if it also works with clusters, which allow Node.js to use multiple cores.
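One common way out of this race is to make the check and the increment a single atomic operation in MongoDB itself, instead of a read callback followed by a write. A sketch, assuming a mongoose model named SmsCounter with an hourCount field (both names are illustrative):

const HOURLY_LIMIT = 100; // illustrative limit

// MongoDB applies the filter and the $inc atomically, so two concurrent
// requests can never both pass the check on the same stale count.
function tryReserveSms(userId, callback) {
  SmsCounter.findOneAndUpdate(
    { userId: userId, hourCount: { $lt: HOURLY_LIMIT } },
    { $inc: { hourCount: 1 } },
    { new: true },
    (err, doc) => {
      if (err) return callback(err);
      callback(null, doc !== null); // null means the limit was reached
    }
  );
}

Because the coordination happens in the database rather than in process memory, this approach also works unchanged under the cluster module.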
I think you should try a cache-based approach.
I am facing the same situation as you right now.
I plan to use a cache to store the record_id of whatever record is currently being processed.
When a new request comes in, it first checks the cache. If the record_id is already in the cache, that record is being used by another request, so this one has to wait or do something else until the other finishes. When a request finishes, it removes the record_id from the cache in its callback, as in the sketch below.
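A minimal sketch of that idea with an in-memory Set (note it only serializes requests within one process, not across cluster workers):

const inProcess = new Set();

function withRecordLock(recordId, work, done) {
  if (inProcess.has(recordId)) {
    return done(new Error('record busy, retry later'));
  }
  inProcess.add(recordId);
  work((err, result) => {
    inProcess.delete(recordId); // always release the "lock"
    done(err, result);
  });
}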
Thanks Cristy, I have solved the main part of my problem using an async queue.
My application works well when I run it the default, single-process Node.js way.
But there is another problem. I intend to run my code on a server that has 4 cores, so I want to use the Node.js cluster module. But when I used it, because cluster runs the code as 4 different processes, each process uses its own queue, and the error I mentioned earlier still occurs: they read and write to the database without waiting for the others to finish the verification + update.
So I would like to know what I should do to have an optimal and fast application.
Should I stop using the cluster module and give up the benefit of a multi-core server (I don't think that is the best answer)?
Should I store the queue in MongoDB (maybe keeping it in memory rather than persisting it, to make it faster)?
Is there a way to share the queue in the code when I use cluster?
What is my best choice?
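For reference, the per-process serialization from the async queue mentioned above looks roughly like this (verifyAndIncrementThenSend stands in for the real work):

const async = require('async');

// Concurrency 1: tasks run strictly one after another in this process.
// Each cluster worker still gets its OWN queue, which is exactly why
// the race reappears with 4 processes.
const smsQueue = async.queue((task, done) => {
  verifyAndIncrementThenSend(task.userId, task.message, done);
}, 1);

smsQueue.push({ userId: 'u1', message: 'hello' });

To make it safe across workers, the serialization has to move out of process memory, e.g. into an atomic database update (like the findOneAndUpdate sketch above) or an external store such as Redis.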
I'm writing a bot for Telegram to gather some stats from a group chat. I need to get info about every message (from the beginning of the chat). I know how I can do it, but it's quite a bad idea: I can use the forwardMessage method, but I need a second account for it, and I get timed out for one hour when I send messages too fast, so it's a very long way to collect stats for a conversation that has over 2 million messages. I tried setting a limit of 10 messages per second, but I'm still getting timed out, so I don't know how it works.
There must be another way to get JUST the message info by id without forwarding it. I can't find it in the API.
There is no API to do this at this time; you can suggest the idea to #BotSupport. Until they add this feature, I am doing the same thing as you.
According to the Bot FAQ, the Telegram API rate limit is 1 message per second per chat, and the global limit is 30 per second.
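Staying under those limits means pacing your own calls. A naive sketch, assuming the node-telegram-bot-api style signature forwardMessage(chatId, fromChatId, messageId):

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function forwardRange(bot, fromChatId, toChatId, firstId, lastId) {
  for (let id = firstId; id <= lastId; id++) {
    try {
      await bot.forwardMessage(toChatId, fromChatId, id);
    } catch (err) {
      // ids can be missing (deleted messages); skip them
    }
    await sleep(1100); // stay just under 1 message per second per chat
  }
}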
There is no way to do this with the Telegram bot API; you can use the ReadHistory method of MadelineProto without the need for the forward-message approach.
Problem
We are developing an Azure Service Bus based Cloud Service, but after 24 hours the queue clients seem to get closed automatically.
Can someone confirm this behavior or advise how to fix it?
At the moment we close the clients after 24 hours manually and recreate them to avoid this effect, but this can't be the only solution.
Sessions dropping intermittently is a normal occurrence. The AMQP protocol and stack in the client is newer and generally more resilient against this. The only reason not to use AMQP is if you are using transactions. Also, unless you have a good reason to run your own receive loop, use OnMessage.
You are getting ‘OperationCanceledException’ when the link fails for any reason and any in-flight requests will fail with this exception. However, this is transient, so you should be able to reuse the same QueueClient to issue receives and those should (eventually) work as the client recovers. OnMessage will hide all of that from you.
It seems that the new Azure SDK extends the visibilitytimeout to <= 7 days. I know that by default, when I add a message to an Azure queue, its time to live is 7 days. If I get a message out and set the visibilitytimeout to 7 days, does that mean I don't need to delete the message if I don't care about message reliability? The message will simply disappear after 7 days.
I want to take this approach because DeleteMessage is very slow. If I don't delete messages, does that have any impact on the performance of GetMessage?
Based on the documentation for Get Messages, I believe it is certainly possible to set the VisibilityTimeout to 7 days so that messages are fetched only once. However, I see some issues with this approach compared to just deleting the message once processing is done:
What happens if you get the message, start processing it, and the processing somehow fails? With the visibility timeout set to 7 days, the message never appears in the queue again, so the work it was supposed to trigger never gets done.
Even though the message is hidden, it is still in the queue, so you keep incurring storage charges for it. The cost is trivial, but why keep a message you don't really need?
A lot of systems rely on the Approximate Messages Count property of a queue to check the health of the processes driven by that queue. Note that a hidden message is still in the queue and is still included in the total message count. If you build a health check on this count, your system will always look unhealthy because you never delete the messages.
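In short: fetch, process, then delete. One way that flow looks in Node, as a sketch assuming the classic azure-storage package (queue name and processMessage are illustrative):

const azure = require('azure-storage');
const queueSvc = azure.createQueueService(); // reads AZURE_STORAGE_CONNECTION_STRING

queueSvc.getMessages('myqueue', { visibilityTimeout: 300 }, (err, messages) => {
  if (err) throw err;
  messages.forEach((msg) => {
    processMessage(msg.messageText);
    // Delete only after the work succeeds; if the process crashes first,
    // the message reappears once the visibility timeout expires.
    queueSvc.deleteMessage('myqueue', msg.messageId, msg.popReceipt, (delErr) => {
      if (delErr) console.error('delete failed:', delErr);
    });
  });
});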
I'm curious to know why you find deleting messages to be very slow. In my experience this is quite fast. How are you monitoring message deletion?
Rather than hacking around the problem, I think you should drill into understanding why the deletes are slow. Have you enabled logging and looked at the E2ELatency and ServerLatency numbers across all your queue operations? Ideally you shouldn't see a large difference between the two; if you do, it implies something is happening on the client that you should investigate further.
For more information on logging take a look at the following articles:
http://blogs.msdn.com/b/windowsazurestorage/archive/tags/analytics+2d00+logging+_2600_amp_3b00_+metrics/
http://msdn.microsoft.com/en-us/library/azure/hh343262.aspx
Information on client-side logging can also be found in this blog post: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/09/07/announcing-storage-client-library-2-1-rtm.aspx
Please let me know what you find.
Jason