The MQTT Protocol Adapter fails to start, and the log files contain the warning message MemoryBasedConnectionLimitStrategy - Not enough memory. What does this mean?
The protocol adapter cannot allocate enough memory to reliably handle even a small number of connections. Please provide more memory.
If you want to ignore the warning and still run the protocol adapter with that little memory, you can explicitly configure the maximum number of concurrent connections via hono.mqtt.maxConnections (see the Admin Guide for details).
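For illustration only (the property name is taken from the warning above; the environment-variable spelling assumes Hono's usual relaxed binding, so check the Admin Guide for the exact form), the limit could be set along these lines when starting the adapter:

```
# sketch only: explicit connection limit for the MQTT adapter
java -Dhono.mqtt.maxConnections=50 ...

# or, assuming the usual environment-variable form of the same property
HONO_MQTT_MAX_CONNECTIONS=50
```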
The documentation for the CreateOptionsBuilder method persistence indicates that setting this value to None will improve performance, but will result in a less reliable system.
Could someone please elaborate on this? Under which circumstances should I consider setting this to None?
The Eclipse Paho MQTT Rust Client Library is a "safe wrapper around the Paho C Library". The persistence options are mapped to values accepted by the C library with None becoming MQTTCLIENT_PERSISTENCE_NONE. The docs for the C client provide a more detailed explanation of the options:
persistence_type The type of persistence to be used by the client:
MQTTCLIENT_PERSISTENCE_NONE: Use in-memory persistence. If the device or system on which the client is running fails or is switched off, the current state of any in-flight messages is lost and some messages may not be delivered even at QoS1 and QoS2.
MQTTCLIENT_PERSISTENCE_DEFAULT: Use the default (file system-based) persistence mechanism. Status about in-flight messages is held in persistent storage and provides some protection against message loss in the case of unexpected failure.
MQTTCLIENT_PERSISTENCE_USER: Use an application-specific persistence implementation. Using this type of persistence gives control of the persistence mechanism to the application. The application has to implement the MQTTClient_persistence interface.
The upshot is that calling persistence(None) means that messages will be held in memory rather than being written to disk (assuming QOS1/2). This has the potential to improve performance (writing to disk can be expensive) but, because the info is only stored in memory, messages may be lost if your application shuts down without completing delivery.
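For comparison (the question is about the Rust wrapper, but the trade-off is the same across the Paho clients), here is a minimal sketch using the Eclipse Paho Java client, where the analogous choice is between MemoryPersistence and the default file-based persistence; the broker URL and client id are placeholders:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class PersistenceChoice {
    public static void main(String[] args) throws Exception {
        // In-memory persistence: faster (no disk writes), but in-flight
        // QoS 1/2 state is lost if the process dies before delivery completes.
        // This is the Java analogue of persistence(None) in the Rust wrapper.
        MqttClient client = new MqttClient(
                "tcp://broker.example.com:1883",   // placeholder broker
                "example-client",                  // placeholder client id
                new MemoryPersistence());

        // Omitting the persistence argument would use the default file-based
        // persistence, which survives a process restart:
        // MqttClient client = new MqttClient("tcp://broker.example.com:1883", "example-client");

        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setCleanSession(false);   // persistence only matters if the session is resumed
        client.connect(opts);

        MqttMessage msg = new MqttMessage("hello".getBytes());
        msg.setQos(1);                 // QoS 0 messages are never persisted anyway
        client.publish("example/topic", msg);

        client.disconnect();
        client.close();
    }
}
```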
A quick example might help (simplifying things a little); let's say you publish a message with QOS=1 and a network issue means that the broker does not receive it. When the connection is re-established (failed delivery will generally mean the connection drops), the client will resend the message (because it has not processed an acknowledgment from the broker). With the default persistence (disk), the message will be retransmitted even if the failure was due to a power outage that affected the server your app was running on (obviously this only happens when power is restored and your app restarts); that message would be lost if you had called persistence(None).
The appropriate setting is going to depend upon your needs, and other options may have an impact (e.g. if Clean Start/CleanSession is true then there is unlikely to be any benefit to persisting to disk).
When you don't care whether all messages are received, e.g. when using only QoS 0 messages.
I'm working on an application where I need to ensure that even if the network goes down, messages will still arrive at their destination reliably, in-order, and unmodified. I've been using TCP, and up until now, I was just using a strategy of:
If a send/receive fails, do it again until no error.
If the remote disconnects, wait until the next connection and replace the socket I was send/receiving from with this new one (achieved through some threading and blocking to ensure it's swapped cleanly).
I recently realised that this doesn't work, as send can't report errors indicating that the remote hasn't received the message (see e.g. here).
I did also learn that TCP connections can survive brief network outages, as the kernel buffers the packets until the connection is declared dead after the timeout period (see here).
The question: Is it a feasible strategy to just crank the timeout period waaaay higher on both client/server side (using setsockopt and the SO_KEEPALIVE options), so that a connection "never times out"? I'd have to handle errors related to the kernel's buffer filling up, but that should be relatively simple.
Are there any other failure cases?
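For reference, turning those knobs up looks roughly like this in Java (a sketch only; the option names mirror the setsockopt ones, JDK 11+ and a platform that supports the extended options are assumed, and the values are arbitrary):

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.StandardSocketOptions;
import jdk.net.ExtendedSocketOptions;

public class KeepAliveTuning {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        // Enable SO_KEEPALIVE (equivalent to setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, ...)).
        socket.setOption(StandardSocketOptions.SO_KEEPALIVE, true);

        // JDK 11+ exposes per-socket keepalive tuning on platforms that
        // support it (Linux does); the values below are arbitrary examples.
        socket.setOption(ExtendedSocketOptions.TCP_KEEPIDLE, 600);      // seconds of idle time before the first probe
        socket.setOption(ExtendedSocketOptions.TCP_KEEPINTERVAL, 60);   // seconds between probes
        socket.setOption(ExtendedSocketOptions.TCP_KEEPCOUNT, 20);      // failed probes before the connection is declared dead

        socket.connect(new InetSocketAddress("example.com", 9000));     // placeholder endpoint
        // ... use the socket ...
        socket.close();
    }
}
```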
If neither end explicitly disconnects, the TCP connection will stay open forever, even if you unplug the cable. There is no timeout in TCP.
However, I would use (or design) an application protocol on top of TCP that makes it possible to resume data transmission after re-connects. You could use HTTP, for example.
That would be much more stable, because relying on the kernel's buffers would, as you say, exhaust them at some point, and the buffered data would also be lost on, say, a power outage.
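To make the "resumable application protocol" idea concrete, here is a rough sketch (in Java, with invented names; not a real protocol) of the sender-side bookkeeping: number each message, keep it until it is acknowledged, and replay anything unacknowledged after a re-connect.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Sender-side bookkeeping for a hypothetical resumable protocol: messages are
// kept until acknowledged, so they can be replayed after a re-connect.
public class ResumableSender {
    private long nextSeq = 1;
    private final Map<Long, byte[]> unacked = new ConcurrentSkipListMap<>();

    /** Assign a sequence number and remember the payload until it is acked. */
    public synchronized long send(byte[] payload, Connection conn) {
        long seq = nextSeq++;
        unacked.put(seq, payload);
        conn.write(seq, payload);          // framing (seq + payload) is up to you
        return seq;
    }

    /** Called when the receiver acknowledges a sequence number. */
    public void onAck(long seq) {
        unacked.remove(seq);
    }

    /** Called after a new connection is established: replay everything unacked, in order. */
    public void onReconnect(Connection conn) {
        unacked.forEach(conn::write);
    }

    /** Minimal connection abstraction; in practice this wraps a Socket's output stream. */
    public interface Connection {
        void write(long seq, byte[] payload);
    }
}
```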
We are running a Java-based trading application, and there are certain periods where we want to prioritize outgoing network traffic as much as possible for about 10 ms. Is there a way to temporarily buffer all incoming network traffic during a short time period, either on the network card or via a process or buffer on our Redhat Linux box?
The rationale behind this is that the incoming network traffic spikes during this same period, and the application processing this traffic is stealing CPU cycles from the process we are trying to prioritize. We do not have fine-grained control over the application treating the incoming network traffic.
We're on a 1 Gbps connection, so a buffer of about 1 MB should be sufficient. We would prefer not to drop the incoming traffic and request retransmission, as this would increase the load on our network during these busy periods.
This may be possible using QoS on the router, or by using trickle to control your bandwidth with a sample configuration in /etc/trickled.conf; see the example at the linked URL.
I am not sure whether I understand your problem correctly. Your concern is that at times you need to prioritise outgoing network traffic, and during those periods the incoming traffic builds up and might eventually cause packet drops or retransmissions, which you don't want. Therefore, you want to buffer your incoming traffic.
If my understanding is correct and you are using TCP, try making your TCP buffer bigger.
See http://kaivanov.blogspot.com/2010/09/linux-tcp-tuning.html, and then use netstat to check whether your change is effective.
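For completeness, the per-socket side of that tuning can also be requested from the receiving application itself, a minimal Java sketch assuming you can touch that code at all; the kernel still clamps the value to net.core.rmem_max, which is what the linked article tunes:

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BigReceiveBuffer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket()) {
            // Ask for a ~1 MB receive buffer; for sizes over 64 KB this must be
            // set before bind so TCP window scaling is negotiated accordingly.
            // The kernel clamps the value to net.core.rmem_max, so raise that first.
            server.setReceiveBufferSize(1 << 20);
            server.bind(new InetSocketAddress(9000));   // placeholder port

            try (Socket client = server.accept()) {
                System.out.println("Effective receive buffer: "
                        + client.getReceiveBufferSize() + " bytes");
                // ... read from client ...
            }
        }
    }
}
```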
Adrian, have you tried setting the priority of your outgoing communication process higher than that of the process receiving the incoming data? This can be achieved with the nice command. Note that in Unix/Linux, the lower the number, the higher the priority.
Otherwise I am not sure this is possible without a direct tie-in between the sending and receiving applications, allowing you to effectively ignore the incoming connections that are ready to be read from until any data you have is sent out.
I've recently read a lot about best practices with JMS, Spring (and TIBCO EMS) around connections, sessions, consumers & producers.
When working within the Spring world, the prevailing wisdom seems to be:
for consuming/incoming flows - to use an AbstractMessageListenerContainer with a number of consumers/threads.
for producing/publishing flows - to use a CachingConnectionFactory underneath a JmsTemplate to maintain a single connection to the broker and then cache sessions and producers (a sketch of this setup follows below).
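For concreteness, a minimal sketch of that publishing setup in plain Spring Java config (the underlying ConnectionFactory is whatever your provider supplies, e.g. TIBCO EMS; the cache size of 10 is just an illustrative value):

```java
import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class JmsPublishingConfig {

    @Bean
    public CachingConnectionFactory cachingConnectionFactory(ConnectionFactory providerConnectionFactory) {
        // Wraps the provider's factory (TIBCO EMS, ActiveMQ, ...) so that one
        // underlying connection is kept open and sessions/producers are cached,
        // instead of a new connection per publish.
        CachingConnectionFactory ccf = new CachingConnectionFactory(providerConnectionFactory);
        ccf.setSessionCacheSize(10);     // illustrative value: max cached sessions
        ccf.setCacheProducers(true);     // the default, shown here for clarity
        return ccf;
    }

    @Bean
    public JmsTemplate jmsTemplate(CachingConnectionFactory cachingConnectionFactory) {
        // JmsTemplate gets/closes a session per operation; with the caching
        // factory those calls hit the cache rather than the broker.
        return new JmsTemplate(cachingConnectionFactory);
    }
}
```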
For producing/publishing, this is what my (largeish) server application is now doing, where previously it was creating a new connection/session/producer for every single message it published (bad!) due to the use of the raw connection factory under JmsTemplate. The old behaviour would sometimes lead to thousands of connections being created and closed on the broker in a short space of time during peak periods, even hitting socket/file-handle limits as a result.
However, since switching to this model I am having trouble understanding what the performance limitations/considerations are around the use of a single TCP connection to the broker. I understand that the JMS provider is expected to ensure the connection can be used in a multi-threaded way etc., but from a practical perspective:
it's just a single TCP connection
the JMS provider needs to co-ordinate writes down the pipe to some degree so they don't end up as an interleaved jumble, even if it has some chunking in its internal protocol
surely this involves some contention between threads/sessions using the single connection
with certain network semantics (high latency to broker? unstable throughput?) surely a single connection will not be ideal?
On the assumption that I'm somewhat on the right track:
Am I off base here and misunderstanding how the underlying connections work and are shared by a JMS provider?
Is any contention a problem mitigated by having more connections, or does it just move the contention to the broker?
Does anyone have practical experience of hitting such a limit that they could share? Either with a particular message or network throughput, or even caused by the number of threads/sessions sharing a connection in parallel.
Should one be concerned in a single-connection scenario about sessions that write very large messages blocking other sessions that write small messages?
Would appreciate any thoughts or pointers to more reading on the subject or experience even with other brokers.
When thinking about the bottleneck, keep in mind two facts:
TCP is a streaming protocol, and almost all JMS providers use a TCP-based protocol.
lots of the actions from the TIBCO EMS client to the EMS server are in the form of request/reply. For example, when you publish a message, acknowledge a received message, or commit a transactional session, what happens under the hood is that some TCP packets are sent out from the client and the server responds with some packets as well. Because of the nature of TCP streaming, those actions have to be serialised if they are initiated from the same connection -- otherwise, if from one thread you publish a message and at the exact same time from another thread you commit a session, the packets would be mixed on the wire and there would be no way for the server to interpret the right message from the packets. [Note: the synchronisation is done at the EMS client library level, hence the user can feel free to share one connection between multiple threads/sessions/consumers/producers.]
My own experience is that multiple connections always out-perform a single connection. In a lossy network situation, using multiple connections is definitely a must. Under the best network conditions, a single client with multiple connections can nearly saturate the network bandwidth between client and server.
That said, it really depends on your clients' performance requirements; a single connection over a good network can already provide good enough performance.
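If you do move to multiple connections, the simplest shape is a small hand-rolled pool that hands out sessions round-robin, so the work is spread over several TCP pipes (a sketch, not a provider feature):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Session;

// A hand-rolled round-robin pool of JMS connections; each createSession() call
// lands on the next connection, spreading sessions over several TCP pipes.
public class ConnectionPool implements AutoCloseable {
    private final List<Connection> connections = new ArrayList<>();
    private final AtomicInteger next = new AtomicInteger();

    public ConnectionPool(ConnectionFactory factory, int size) throws JMSException {
        for (int i = 0; i < size; i++) {
            Connection c = factory.createConnection();
            c.start();
            connections.add(c);
        }
    }

    public Session createSession() throws JMSException {
        Connection c = connections.get(Math.floorMod(next.getAndIncrement(), connections.size()));
        return c.createSession(false, Session.AUTO_ACKNOWLEDGE);
    }

    @Override
    public void close() throws JMSException {
        for (Connection c : connections) {
            c.close();
        }
    }
}
```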
Even if you use one connection and 100 sessions, you are ultimately using 100 threads; it is the same as using 10 connections * 10 sessions = 100 threads.
You are good until you reach your system's resource limits.
I am trying to build a TCP server using Spring Integration that keeps connections open; these may run into the thousands at any point in time. Key concerns are:
The maximum number of concurrent client connections that can be managed, as sessions would be live for a long period of time.
What is advised in case connections exceed the limit specified in (1)? Something along the lines of a cluster of servers would be helpful.
There's no mechanism to limit the number of connections allowed. You can, however, limit the workload by using fixed thread pools. You could also use an ApplicationListener to get TcpConnectionOpenEvents and immediately close the socket if your limit is exceeded (perhaps sending some error to the client first).
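A minimal sketch of that listener approach, using the annotation-based equivalent of ApplicationListener (class names are from Spring Integration's IP module; the limit of 1000 and the close-on-excess policy are purely illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.context.event.EventListener;
import org.springframework.integration.ip.tcp.connection.TcpConnection;
import org.springframework.integration.ip.tcp.connection.TcpConnectionCloseEvent;
import org.springframework.integration.ip.tcp.connection.TcpConnectionOpenEvent;
import org.springframework.stereotype.Component;

// Closes newly opened connections once an (illustrative) limit is reached.
@Component
public class ConnectionLimiter {

    private static final int MAX_CONNECTIONS = 1000;   // illustrative limit
    private final AtomicInteger open = new AtomicInteger();

    @EventListener
    public void onOpen(TcpConnectionOpenEvent event) {
        if (open.incrementAndGet() > MAX_CONNECTIONS) {
            // Optionally send some error to the client before closing.
            ((TcpConnection) event.getSource()).close();
        }
    }

    @EventListener
    public void onClose(TcpConnectionCloseEvent event) {
        open.decrementAndGet();
    }
}
```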
Of course you can have a cluster, together with some kind of load balancer.