I'm using Android as my Netty client and Windows as my Netty server.
Recently I've noticed some strange behavior in Netty.
When I start the server-side app, it uses only about 30 MB of memory.
But after several hours it rises to about 300 MB, 10x the original usage.
The longer the server runs, the more memory it consumes.
I don't know why this is happening. Is it normal?
By the way, since Netty doesn't have a built-in server-push feature, I use a static map to store all of the Channels:
public static final Map<Integer, Channel> mapConcurrentIdChannel = new ConcurrentHashMap<Integer, Channel>();
I map the channel ID to its Channel.
For example: whenever client A wants to push a message to client B, the server looks up B's channel ID, gets the Channel instance, and calls Channel.write(object).
Is this a correct way to implement a push-message feature in Netty?
(If not, could you please suggest a good way to implement push? No official docs mention it.)
Also, I'm afraid this implementation is causing the "memory leak problem" I described earlier.
About using ChannelGroup:
My scenario is that there are five people: A, B, C, D, E. Sometimes A wants to send a message to C, and sometimes B wants to send a message to E.
I can't predict when somebody will send a message, or to whom. So I can't simply add all five connections to one ChannelGroup, because writing to that group would broadcast the message to everyone.
I've searched on Google for a long time and found nothing that helps with the problem I'm facing.
I'd like to hear some recommendations from developers experienced with Netty!
Thanks!
I think you want to use ChannelGroup [1] for this, which is basically also just a ConcurrentMap, but it ensures the Channel is removed when it is closed, etc.
[1] http://netty.io/3.6/api/org/jboss/netty/channel/group/DefaultChannelGroup.html
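For the memory growth specifically: if entries are never removed from that static map when clients disconnect, every dead connection keeps its Channel (and its buffers) reachable forever, which would explain the steady climb. Here is a minimal sketch against the Netty 3.x API linked above, reusing the map from the question; the class and method names are my own, for illustration:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;

public class ChannelRegistry {
    public static final Map<Integer, Channel> mapConcurrentIdChannel =
            new ConcurrentHashMap<Integer, Channel>();

    // Register a channel and make sure it is removed again once it closes,
    // so disconnected clients cannot accumulate in the map.
    public static void register(final Integer id, Channel channel) {
        mapConcurrentIdChannel.put(id, channel);
        channel.getCloseFuture().addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) {
                mapConcurrentIdChannel.remove(id);
            }
        });
    }

    // Push: look the target up by id and write to it if still connected.
    public static void push(Integer targetId, Object message) {
        Channel target = mapConcurrentIdChannel.get(targetId);
        if (target != null && target.isConnected()) {
            target.write(message);
        }
    }
}

This cleanup-on-close is essentially what DefaultChannelGroup does internally, so either a ChannelGroup (where broadcast semantics fit) or a map plus a close-future listener (where per-client lookup is needed) is reasonable.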
The documentation for the CreateOptionsBuilder persistence method indicates that setting this value to None will improve performance, but at the cost of a less reliable system.
Could someone please elaborate on this? Under which circumstances should I consider setting it to None?
The Eclipse Paho MQTT Rust Client Library is a "safe wrapper around the Paho C Library". The persistence options are mapped to values accepted by the C library with None becoming MQTTCLIENT_PERSISTENCE_NONE. The docs for the C client provide a more detailed explanation of the options:
persistence_type The type of persistence to be used by the client:
MQTTCLIENT_PERSISTENCE_NONE: Use in-memory persistence. If the device or system on which the client is running fails or is switched off, the current state of any in-flight messages is lost and some messages may not be delivered even at QoS1 and QoS2.
MQTTCLIENT_PERSISTENCE_DEFAULT: Use the default (file system-based) persistence mechanism. Status about in-flight messages is held in persistent storage and provides some protection against message loss in the case of unexpected failure.
MQTTCLIENT_PERSISTENCE_USER: Use an application-specific persistence implementation. Using this type of persistence gives control of the persistence mechanism to the application. The application has to implement the MQTTClient_persistence interface.
The upshot is that calling persistence(None) means messages are held in memory rather than written to disk (assuming QoS 1/2). This has the potential to improve performance (writing to disk can be expensive), but because the state is only stored in memory, messages may be lost if your application shuts down before completing delivery.
A quick example might help (simplifying things a little). Let's say you publish a message with QoS 1 and a network issue means the broker does not receive it. When the connection is re-established (a failed delivery will generally mean the connection drops), the client resends the message, because it has not processed an acknowledgment from the broker. With the default (disk) persistence, the message is retransmitted even if the failure was a power outage affecting the server your app runs on (obviously this only happens once power is restored and your app restarts); with persistence(None), that message would be lost.
The appropriate setting will depend on your needs, and other options may have an impact (e.g. if Clean Start/CleanSession is true, there is unlikely to be any benefit to persisting to disk).
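The question is about the Rust wrapper, but the same choice from the underlying C library is also exposed in the Paho Java client, so a sketch there may make the trade-off concrete (the broker URI, client IDs, and storage directory are placeholders):

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
import org.eclipse.paho.client.mqttv3.persist.MqttDefaultFilePersistence;

public class PersistenceChoice {
    public static void main(String[] args) throws MqttException {
        // Analogue of persistence(None): in-flight QoS 1/2 state is held in
        // memory only, so it is lost if the process dies mid-delivery.
        MqttClient fast = new MqttClient(
                "tcp://broker.example.com:1883", "client-fast",
                new MemoryPersistence());

        // Analogue of the default: state is written to disk, so delivery can
        // resume after a restart, at the cost of disk I/O per message.
        MqttClient reliable = new MqttClient(
                "tcp://broker.example.com:1883", "client-reliable",
                new MqttDefaultFilePersistence("/tmp/mqtt-state"));
    }
}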
When you don't care whether all messages are received, e.g. when using only QoS 0 messages.
I thought my question was related to the post "Azure Service Bus: How to Renew Lock?", but I have already tried RenewLockAsync.
Here is the concern: I am receiving messages from the Service Bus with sessions enabled, so I get the session and then receive messages. All good; here's the rub.
There are TWO ADDITIONAL processes to complete per message: a manual transform/harvest of the message into some other object, which is then sent out to a Kafka topic (stream). Note it's all async on top of this craziness. My team lead is insistent that the two sub-processes can just be added INTO the receive process (ReceiveAsync), finally calling session.CompleteAsync() AFTER the other two processes complete.
Well, needless to say, with that architecture I'm consistently getting the error "The session lock has expired on the MessageSession. Accept a new MessageSession." I haven't even fleshed out the send-to-Kafka part; it's just mocked, so it's going to take even longer once fleshed out.
Is it even remotely plausible to call session.CompleteAsync() AFTER the sub-processes, or should that be done when the message is successfully received, before moving on to other processing? I thought separate tasks would be more appropriate, but he didn't dig that idea.
I appreciate all insight and opinions thank you !
"The session lock has expired on the MessageSession. Accept a new MessageSession." indicates one of 2 things:
The lock has been open for too long, in which case calling "RenewLockAsync" before it expires would help.
The message lock has been explicitly released, through a call to CompleteAsync, AbandonAsync, DeadLetterAsync, etc. That would indicate a bug, since the lock can not be used after it has been released
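To make the renew-while-you-work option concrete, here is a sketch using the Azure Service Bus Java SDK (the question uses the .NET SDK, but the shape is the same; the queue name, renewal interval, and transform/Kafka stubs are placeholders of mine):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.ServiceBusSessionReceiverClient;

public class SessionWorker {
    public static void main(String[] args) {
        ServiceBusSessionReceiverClient sessionClient = new ServiceBusClientBuilder()
                .connectionString(System.getenv("SB_CONNECTION")) // assumed env var
                .sessionReceiver()
                .queueName("my-queue")
                .buildClient();

        ServiceBusReceiverClient receiver = sessionClient.acceptNextSession();

        // Keep renewing the session lock on a timer while the two
        // sub-processes are in flight, well inside the lock duration.
        ScheduledExecutorService renewer = Executors.newSingleThreadScheduledExecutor();
        renewer.scheduleAtFixedRate(receiver::renewSessionLock, 20, 20, TimeUnit.SECONDS);

        for (ServiceBusReceivedMessage msg : receiver.receiveMessages(10)) {
            Object transformed = transform(msg); // sub-process 1 (your code)
            sendToKafka(transformed);            // sub-process 2 (your code)
            receiver.complete(msg);              // only after both succeed
        }

        renewer.shutdown();
        receiver.close();
        sessionClient.close();
    }

    private static Object transform(ServiceBusReceivedMessage msg) { return msg.getBody(); }
    private static void sendToKafka(Object payload) { /* mocked, as in the question */ }
}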
I'm working on an application where I want to use ZeroMQ to connect nodes of different types which may be added and removed while the system is running. This means that I want to call zmq_connect() or zmq_disconnect() at any time as nodes come and go.
Some connections use sockets of type ZMQ_REQ, which block when no peers are available. Thus it may happen that one node is blocked in zmq_recv() without any node available to process the request. If a new node then becomes available, I would like to connect the socket using zmq_connect(). The only way I can see to do that is to call zmq_connect() from a different thread. But the documentation states pretty clearly that zmq_socket instances cannot be used from multiple threads simultaneously.
How can I solve this problem: sending messages on a ZMQ_REQ socket without any connections (or connections which cannot be established), and then later adding connections and having the waiting requests processed?
You should not call zmq_recv() when no messages are ready; that way you avoid blocking your thread. Instead, check that there is indeed a message to receive. The easiest way to achieve this is with a poller. Since you haven't stated which library or language you're using, I can't give you the exact example, but the C examples from the ZeroMQ Guide could be of use.
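As a concrete illustration of the poller approach, here is a sketch using JeroMQ, the Java binding (an assumption on my part, since the question doesn't name a language; the endpoint and timeout are placeholders):

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class PollingReq {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket req = ctx.createSocket(SocketType.REQ);
            req.connect("tcp://localhost:5555");
            req.send("hello");

            ZMQ.Poller poller = ctx.createPoller(1);
            poller.register(req, ZMQ.Poller.POLLIN);
            while (true) {
                poller.poll(1000); // wait at most 1s for a reply
                if (poller.pollin(0)) {
                    System.out.println(req.recvStr());
                    break;
                }
                // No reply yet: the thread is free here, so this is where you
                // could connect() to a newly discovered peer, still
                // single-threaded as the docs require.
            }
        }
    }
}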
In my experience, ZeroMQ-based applications are most effectively built from single-threaded nodes that react to messages and, if necessary, run methods at time intervals.
For building a system like the one you describe, I suggest you look at the Service Discovery chapter of the awesome ZeroMQ Guide.
We are using Socket.IO on a large chat application.
At some points we want to dispatch "presence" (user availability) to all other users.
io.in('room1').emit('availability:update', { userid: 'xxx', isAvailable: false });
room1 may contain a lot of users (500 max). We observe a significant rise in our Node.js load when many availability updates are triggered.
The idea was to use something similar to the Redis store for Socket.IO, and have web browser clients connect to different Node.js servers.
When we want to emit to a room, we dispatch the "emit to room1" payload to all other Node.js processes using Redis Pub/Sub, ZeroMQ, or even RabbitMQ for persistence. Each process then calls its own io.in('room1').emit to target its subset of connected users.
One concern with this setup is that the inter-process communication may become quite busy, and I was wondering whether it may become a problem in the future.
Here is the architecture I have in mind.
Could you batch changes and only distribute them every 5 seconds or so? In other words, on each node server, simply take a 'snapshot' every X seconds of the current state of all users (e.g. 'connected', 'idle', etc.) and then send that to the other relevant servers in your cluster.
Each server then does the same: every 5 seconds or so it sends one message, containing only the changes in user state, as a single batched object array to all connected clients.
Right now I'm rather surprised you are attempting to send information about each user as its own packet. Batching seems like it would solve your problem quite well, as it would also make better use of the standard packet sizes normally transmitted via routers and switches.
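To illustrate, here is a rough sketch of the batching idea in Java (the question's stack is Node, but the pattern is language-agnostic; the room name and flush interval are placeholders):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PresenceBatcher {
    // Latest state per user; a newer update simply overwrites the older one,
    // so a user who toggles twice in one window costs a single entry.
    private final Map<String, Boolean> pending = new ConcurrentHashMap<>();

    public void onAvailabilityChange(String userId, boolean isAvailable) {
        pending.put(userId, isAvailable);
    }

    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            if (pending.isEmpty()) {
                return;
            }
            Map<String, Boolean> batch = new HashMap<>(pending);
            // Remove only entries that are unchanged since the copy, so an
            // update arriving mid-flush is kept for the next window.
            for (Map.Entry<String, Boolean> e : batch.entrySet()) {
                pending.remove(e.getKey(), e.getValue());
            }
            broadcastToRoom("room1", batch); // one emit for the whole window
        }, 5, 5, TimeUnit.SECONDS);
    }

    private void broadcastToRoom(String room, Map<String, Boolean> batch) {
        // Stand-in for io.in(room).emit('availability:batch', batch) on the Node side.
    }
}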
You are looking for this library:
https://github.com/automattic/socket.io-redis
which can be used with this emitter:
https://github.com/Automattic/socket.io-emitter
About the available-users function, I think there are two alternatives: you can create a "users queue" that holds "public data" about connected users, or you can use the exchanges' binding information to show which users are connected. If you use a users queue, it will be the same for each room, and you would update it when a user leaves by popping their state message from the queue (although you would have to reorganize the whole queue to do so).
Nevertheless, I think RabbitMQ is designed for asynchronous communication, and it is not a very useful approach for keeping a register of user presence. It is better suited to applications where you don't know when the user will receive the message or their "real availability" ("fire and forget" architectures). ZeroMQ requires more work from scratch, but you could implement something more specific to your situation with better performance.
The publish/subscribe example on the RabbitMQ site could be a good starting point for a new design like yours, where a message is sent to several users at the same time. In summary, I would create two queues per user (one for receiving and one for sending messages) and use a dedicated exchange for each chat room, controlling which users are in each room through the exchanges' binding information: you always have two queues per user, and you create exchanges to bind them to one or more chat rooms.
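As a rough sketch of that exchange-per-room idea using the RabbitMQ Java client (the queue and exchange names here are mine, for illustration):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RoomExchanges {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            // One fanout exchange per room; one receive queue per user.
            ch.exchangeDeclare("room.room1", "fanout");
            ch.queueDeclare("user.A.recv", true, false, false, null);

            // Binding = membership: user A is now "in" room1.
            ch.queueBind("user.A.recv", "room.room1", "");

            // Publishing to the room reaches every bound user's queue.
            ch.basicPublish("room.room1", "", null, "hello room1".getBytes("UTF-8"));
        }
    }
}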
I hope this answer is useful to you; sorry for my bad English.
This is the common approach for sharing data across several Socket.IO processes. You have done well so far with a single process and a single thread. I would cautiously assume that you could pick any of the mentioned technologies for communicating shared data without hitting performance issues.
If all you need is IPC, you could perhaps have a look at Faye. If, however, you need some data persisted, you could start a Redis cluster with as many Redis masters as you have CPUs, though this will add minor networking noise for Pub/Sub.
I have no clue if it's better to ask this here, or over on Programmers.SE, so if I have this wrong, please migrate.
First, a bit about what I'm trying to implement. I have a node.js application that takes messages from one source (a socket.io client), and then does processing on the message, which might result in zero or more messages back out, either to the sender, or other clients within that group.
For the processing, I would essentially like to just shove the message into a queue, have it work its way through various message processors that might kick off their own items, and eventually have the bit running socket.io be informed: "Hey, send this message back."
As a concrete example, say a user signs into the service. That sign-in message is placed in the queue, where the authorization processor gets it, does its thing, then places a message back in the queue saying the client has been authorized. This goes back to the socket.io socket that is connected to the client, along with other clients that might be interested. It can also go to other subsystems that might want to do more processing on authorization (looking up user info, sending more info to the client based on their data, etc.).
If I wanted strong coupling, this would be easy, but I tried that before and it just turned into a mess of spaghetti code that's very fragile, which I would like to avoid. Another wrench in the setup is that this should be cluster-able, which is where the real problem comes in. There might be more than one, say, authorization processor running, but an authorization message should be processed only once.
So, in short, I'm looking for a pattern/technique that will allow me to, essentially, have multiple "groups" of subscribers for a message, and the message will be processed only once per group.
I thought about having each instance of a processor generate a unique name that would be used as a list in Redis. This name would then be registered with some sort of dispatch handler and placed into a set for that group of subscribers. When a message arrives, the dispatcher pulls a random member out of that set and pushes the message onto that member's list. While it seems like this would work, it feels over-complicated and fragile.
The core problem is I've never designed a system like this, so I'm not even sure the proper terms to use or look up. If anyone can point me in the right direction for this, I would be most appreciative.
I think what you're describing is similar to the https://www.getbridge.com/ service. I tried it but ended up writing my own based on ZeroMQ; it allows you to register services (req/rep) and channels, which are pub/sub workers.
As for the design, I used client -> broker -> services & channels, all plug-and-play using auto-discovery: services register their schema with the brokers, which open a TCP connection so that brokers on other servers can communicate with that broker group's services. Internal services and clients then connect via Unix sockets or IPC channels, whichever is preferred.
I ended up wrapping the Redis publish/subscribe functions a bit to do this. Each type of message processor gets a "group name", and there can be multiple instances of a processor within that group (so multiple instances of the program can run for clustering).
When publishing a message, I generate an incremental ID, store the message in a string key under that ID, then publish the message ID.
On the receiving end, the first thing a subscriber does is attempt to add the message ID it just received from the publisher into a set of received messages for its group, using sadd. If sadd returns 0, the message has already been grabbed by another instance, and the subscriber just returns. If it returns 1, the full message is pulled out of the string key and sent to the listener.
Of course, this relies on Redis being single-threaded, which I imagine will continue to be the case.
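For reference, here is roughly the same pattern sketched with the Jedis Java client (the original was Node; the key names are illustrative):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class GroupedSubscriber {
    public static void publish(Jedis jedis, String channel, String payload) {
        long id = jedis.incr("msg:next-id");       // incremental message ID
        jedis.set("msg:" + id, payload);           // store the full message
        jedis.publish(channel, String.valueOf(id)); // publish only the ID
    }

    // A subscribed connection cannot issue other commands, hence the
    // separate "worker" connection for sadd/get.
    public static void subscribe(Jedis jedis, Jedis worker, String channel, String group) {
        jedis.subscribe(new JedisPubSub() {
            @Override
            public void onMessage(String ch, String messageId) {
                // sadd returns 1 only for the first instance in this group
                // to claim the ID; everyone else sees 0 and skips it.
                if (worker.sadd("claimed:" + group, messageId) == 1) {
                    process(worker.get("msg:" + messageId));
                }
            }
        }, channel);
    }

    private static void process(String payload) {
        System.out.println("processing: " + payload);
    }
}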
What you might be looking for is an AMQP protocol implementation, where you can have queues bound to custom exchanges and implement a pub/sub model.
RabbitMQ is a popular AMQP implementation with lots of client libraries.
It also has a node.js library.