Maximum no. of concurrent connections exceeded for SmtpClient - c#-4.0

I have a scenario: an MSMQ queuing system where records are queued periodically, and a WCF listener that listens to the queue and starts processing records as soon as they are queued up. It does some processing and then sends an email (there are 10 queues and 10 listeners, of which 3 listeners are responsible for sending email). The problem I am facing is in the email-sending part: when a larger amount of data is queued up, some records fail with the following error:
Service not available, closing transmission channel. The server response was: 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel
The class that sends the email is:
public class A
{
    // Method is static as it is a common method used by other processes running in parallel
    public static void SendMail()
    {
        MailMessage mail = new MailMessage();
        SmtpClient client = new SmtpClient();
        // Email information goes here
        client.Send(mail);
    }
}
I guess that even though my method is static, the SmtpClient object is instantiated on each call, which causes the problem. Even increasing the allowed number of concurrent connections does not solve it. I have a couple of workarounds, but I need some more light shed on this.
1. Limit the number of concurrent connections to, say, 100. Then even if 1000 records are queued up and the listeners start processing them in parallel, the SMTP process will never use more than 100 connections at a time; it waits for those to complete and then takes up the next 100, and so on. But I am not sure how to do this (a sketch of one approach follows below).
2. Use a Parallel.ForEach loop or the SmtpClient.SendAsync method. Here, too, my proficiency with these methods is limited, so I am a little bit afraid of them (I need to make sure there is no major performance hit).
So I just need a stable and better approach to solve this.
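For workaround 1, a minimal sketch of one way to cap concurrency is to guard the send with a SemaphoreSlim sized to the limit. The ThrottledMailer name and the MaxConcurrentSends constant below are illustrative, not from the original code:

using System.Net.Mail;
using System.Threading;

public static class ThrottledMailer
{
    // At most 100 sends may be in flight at once; excess callers wait their turn.
    private const int MaxConcurrentSends = 100;
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(MaxConcurrentSends);

    public static void SendMail(MailMessage mail)
    {
        Gate.Wait();   // blocks once 100 sends are already running
        try
        {
            using (SmtpClient client = new SmtpClient())
            {
                client.Send(mail);
            }
        }
        finally
        {
            Gate.Release();   // free a slot for the next queued sender
        }
    }
}

On frameworks newer than .NET 4.0, the same gate also works for asynchronous sends via Gate.WaitAsync() and await.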

Related

Azure Service Bus Send throughput .Net SDK

I am currently implementing a library to send messages faster to the Service Bus queue. What I have observed is that if I use the same ServiceBusClient and the same sender to send messages in a Parallel.For, the throughput is not very high and my network upload speed is not fully utilized. The moment I create individual clients and use them to send, the throughput increases drastically and even utilizes my upload bandwidth very well.
Is my understanding correct, or should a single client and sender do? Also, I am averse to creating multiple clients, as it will use a lot of resources to establish each client connection. Are there any articles that throw some light on this?
There is a throughput test tool, and its code also creates multiple clients:
protected override Task OnStartAsync()
{
    for (int i = 0; i < this.Settings.SenderCount; i++)
    {
        this.senders.Add(Task.Run(SendTask));
    }
    return Task.WhenAll(senders);
}

async Task SendTask()
{
    var client = new ServiceBusClient(this.Settings.ConnectionString);
    ServiceBusSender sender = client.CreateSender(this.Settings.SendPath);
    var payload = new byte[this.Settings.MessageSizeInBytes];
    // DynamicSemaphoreSlim is a helper type from the test tool's repository.
    var semaphore = new DynamicSemaphoreSlim(this.Settings.MaxInflightSends.Value);
    var done = new SemaphoreSlim(1);
    done.Wait();
    long totalSends = 0;
    // ... (remainder of the method elided; see the linked repository for the full send loop)
}
https://github.com/Azure-Samples/service-bus-dotnet-messaging-performance
Is there a library to manage the connections in a pool?
From the patterns in your code, I'm assuming that you're using the Azure.Messaging.ServiceBus package. If that isn't the case, please ignore the remainder of this post.
ServiceBusClient represents a single AMQP connection to the service. Any senders, receivers, and processors spawned from this client will share that connection. This gives your application the ability to control the number of connections used and pool them in the manner that works best in your context.
It is recommended to reuse clients, senders, receivers, and processors for the lifetime of your application; though the connection is shared, each time a new child type is spawned, it must establish a new AMQP link and perform the authorization handshake - which is non-trivial overhead.
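As a minimal sketch of that reuse (assuming the Azure.Messaging.ServiceBus package; the connection string and queue name are placeholders):

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class Messaging
{
    // One client == one AMQP connection, created once and shared.
    private static readonly ServiceBusClient Client =
        new ServiceBusClient("<connection-string>");

    // One sender == one AMQP link on that connection, also created once and reused.
    private static readonly ServiceBusSender Sender =
        Client.CreateSender("<queue-name>");

    public static Task SendAsync(string body) =>
        Sender.SendMessageAsync(new ServiceBusMessage(body));
}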
These types are self-managing with respect to resources. For idle periods, connections and links will be closed to avoid waste, and they'll automatically be recreated for the first operation that requires them.
With respect to using multiple clients, senders, receivers, and processors - it is a valid approach and can yield better performance in some scenarios. The one caveat that I'll mention is that using more clients than the number of CPU cores in your host environment comes with an increased risk of causing contention in the thread pool. The Service Bus library is highly asynchronous, and its performance relies on continuations for async calls being scheduled in a timely manner.
Unfortunately, performance tuning is very difficult to generalize due to how much it varies for different application and hosting contexts. To find the right number of senders to maximize throughput for your application, we recommend that you spend time testing different values and observing the performance characteristics in your specific system.
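One hypothetical shape for such a test, varying the number of independent connections (MeasureThroughputAsync is an illustrative helper, not part of the SDK):

using System;
using System.Linq;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class ThroughputProbe
{
    // Send messagesPerClient messages from each of clientCount independent
    // connections and report the aggregate messages/second.
    public static async Task<double> MeasureThroughputAsync(
        string connectionString, string queueName, int clientCount, int messagesPerClient)
    {
        DateTime started = DateTime.UtcNow;
        await Task.WhenAll(Enumerable.Range(0, clientCount).Select(async _ =>
        {
            await using var client = new ServiceBusClient(connectionString);
            ServiceBusSender sender = client.CreateSender(queueName);
            for (int i = 0; i < messagesPerClient; i++)
            {
                await sender.SendMessageAsync(new ServiceBusMessage("probe"));
            }
        }));
        double seconds = (DateTime.UtcNow - started).TotalSeconds;
        return clientCount * messagesPerClient / seconds;
    }
}

Comparing results for clientCount values up to and beyond Environment.ProcessorCount shows where thread-pool contention starts to bite.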
For the new SDK, the same principle of connection management applies, i.e., re-creating connections is expensive.
You can connect client objects directly to the bus, or you can create a ServiceBusConnection and share a single connection between clients.
If your scenario is to send as many messages as possible to a single queue, then you can increase throughput by spinning up multiple ServiceBusConnection and client objects on separate threads.
Is there a library to manage the connections in a pool?
There's no connection pooling happening under the hood, and new connections are relatively expensive to create. With the previous SDK, the advice was to reuse factories and clients where possible.
Refer to this article for more information.
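With the older Microsoft.Azure.ServiceBus package, sharing a single connection looks roughly like this (a sketch; the connection string and entity paths are placeholders):

using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

public static class SharedConnectionExample
{
    public static void Build()
    {
        // One ServiceBusConnection (one AMQP connection) shared by both clients.
        var connection = new ServiceBusConnection("<connection-string>");

        var queueClient = new QueueClient(connection, "<queue-a>",
            ReceiveMode.PeekLock, RetryPolicy.Default);
        var sender = new MessageSender(connection, "<queue-b>");
    }
}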

Node.js, multiple requests that take time

I have a question that is keeping me busy, and I wondered if anyone might have the answer.
Let's assume that I have an Express app that listens for a POST request.
This POST request triggers a function in the program that makes over 1000 axios calls to a service (an SMS provider that sends an SMS message for each API call).
Now, assuming that one request to the SMS provider takes 200 ms, 1000 of them will take over 2 minutes.
Of course, I will send a response from the server earlier, saying that the request has been received.
My question is: let's say there are 20 or even 100 such requests at the same time; how can I get the app to handle that traffic? Even if it can, it will take very long to perform all of those actions.
How can this be done? Is there a preferred language for it?
Is it possible with Node?
If you are broadcasting the same SMS to many users, you should look for a broadcast command in the SMS API. This would decrease the number of operations/bandwidth required on your server.
If broadcasting support doesn't exist, you should let other requests be served while processing those SMS tasks. Something like:
function smsTask(n)
{
    // doSmsWork is the caller's async SMS call (e.g. the axios request)
    doSmsWork(function(response){
        if(n)
        {
            setTimeout(function(){
                smsTask(n - 1);
            }, 0);
        }
    });
}
smsTask(num);
Between if(n) and smsTask(n-1), other asynchronous tasks can find time to run, including fetching/intercepting new SMS tasks, and the RAM requirement depends only on the number of requests in flight, not on the number of SMS tasks. If you need to cap the bandwidth used by all SMS tasks, you can pass a dynamic waiting value (instead of 0) to setTimeout; this dedicates more bandwidth to other tasks (like serving web pages) instead of it being fully consumed by SMS spam.
If you don't need new requests to be handled asynchronously, then you can complete the whole work per request much quicker than 2 minutes, if it's I/O bound:
function smsTask(n)
{
    // fire all n requests concurrently and let the event loop drain them
    for(let i = 0; i < n; i++)
        doSmsWork(function(response){
        });
}
smsTask(num);
Under perfect I/O conditions, this can complete within 200-300 ms even for n = 1000. It still doesn't block new tasks/requests, but it puts much more pressure on the queue and probably consumes more memory, while the first version keeps memory consumption steady, depending only on the number of requests in flight.
If you need even less I/O contention, you can put the tasks into a queue and have a dedicated function process the queue:
let work = [];
// some_task(n) wraps the SMS work in an object exposing a compute() method
function smsTask(n){ work.push(some_task(n)); }
setInterval(function(){
    if(work.length > 0)
    {
        let task = work.shift();
        task?.compute(); // does the sms work
    }
}, 1000);
This is steady-state processing and still doesn't stall under sudden request spikes, as long as the work queue does not overflow memory. A setTimeout version of this may be better for the CPU.

Spawn multiple threads from a single EJB @Asynchronous method

I'm building a Java EE application where one of the requirements is to send messages to registered e-mail addresses (around 1000 to 2000). Access to the application is limited, and at any time there will be fewer than 2 users logged in.
For sending e-mails I'm using JavaMail, a @Stateless bean, and an @Asynchronous method.
My problem is that it takes too long to send the 1000+ e-mails, around 1.2 s per e-mail on my development server. What should I do to reduce the time? Can I spawn multiple stateless beans? Or, in that case, is creating around 10 to 15 threads not too bad, given such low user access?
Your performance problem is probably due to creating a new connection to send each message, as described in the JavaMail FAQ. Your code needs to be able to cache and reuse the connection to the mail server. A better approach for sending the messages asynchronously might be to put the information necessary to construct the message in a JMS message and then use a (pool of) MDB to process the information, turn it into a mail message, and send it, while caching and reusing the Transport object that represents the connection to the server.
You need to configure the async thread pool size inside your container; the default size is usually 15 parallel threads. There isn't one thread per bean instance, but if the pool fills up there will be a queue, with at most 15 sending at a time.

What's the best way to limit async send speed based on response handling speed in Netty 4?

I'm writing an RPC client which uses Netty 4 for the networking. The client puts requests into a map; after a response is received, the request's callback is triggered in the channel handler and the request is removed.
During a benchmark test, my sending thread seemed to be sending too fast, as the response latency increased to ~1 s.
So what's the best way to control the speed of the sending thread based on the channel handler's speed? Do I have to add another blocking queue, so that if there are too many requests in the map the sender blocks on the queue?
Have you tried setting the AUTO_READ option to false on the ServerBootstrap? This controls how much data you read on one channel before another channel consumes it.

MultiThread or Multi Lists?

As I have seen a topic recommending no more than 200 threads for a server machine,
I am trying to implement a Listener class which listens to 1000 devices; that is, 1000 devices send different types of messages to the application.
I tried two different ways:
1. Create a thread for each device at runtime, plus a dynamic list which holds the messages for that device, and start the thread to process the messages from that list.
But my machine will not create more than 50 threads :), and I agree it's a bad idea...
2. I created 10 different lists which hold the 10 different types of messages,
and I created 10 processor threads for those lists, each of which goes to its relevant list, processes a message, and then deletes it.
But here is the problem: let's say I receive 50 messages from 50 devices in list 1;
by the time list 1's processor thread gets to the last (50th) message, its time will have expired, the limit being 10 seconds.
Any ideas for the best architecture to talk to more than 500 devices and process their different types of messages within 10 seconds?
I am working in C#; my application is connected to the server as a client using TCP/IP.
That server further connects to the online devices, which send messages to the server with a device ID, message data, and message type; I then receive the messages from that server and reply back through it using the device ID.
I think you need to partition the system differently. The listeners should be high priority but should only enqueue the requests. The queue should then be processed by a pool of workers. You could add prioritisation and other optimisations on the dequeuing side. In terms of getting everything done in 10 seconds, it is really the second half of the system that you will be optimising.
Think of a traditional queuing system. You have a queue of work requests to process, and each request has a series of attributes, say Name (string) and Priority (int). Once a work request has been queued, other workers (threads/processes etc.) can interrogate the queue to pull out items based on priority and process them.
To meet the 10-second deadline, I'd say that as soon as a worker has started processing a request, a timer comes into play and will mark that request as timed out in 10 seconds unless the worker completes the task. Other workers can watch for the results of the work in the queue and then handle the response behaviours.
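A minimal C# sketch of that partitioning, using a BlockingCollection as the queue (the WorkRequest type and the Process hook are illustrative, not from the question):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class WorkRequest
{
    public string Name;
    public int Priority;
    public string Payload;
}

public static class QueuePartitionDemo
{
    // Listeners enqueue; a fixed pool of workers dequeues and processes.
    private static readonly BlockingCollection<WorkRequest> Queue =
        new BlockingCollection<WorkRequest>();

    public static void StartWorkers(int workerCount)
    {
        for (int i = 0; i < workerCount; i++)
        {
            Task.Factory.StartNew(() =>
            {
                foreach (WorkRequest request in Queue.GetConsumingEnumerable())
                {
                    // Enforce the 10-second budget per request.
                    using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10)))
                    {
                        Process(request, cts.Token);
                    }
                }
            }, TaskCreationOptions.LongRunning);
        }
    }

    // Called by the listener as messages arrive; cheap, never blocks on processing.
    public static void Enqueue(WorkRequest request) => Queue.Add(request);

    private static void Process(WorkRequest request, CancellationToken timeout)
    {
        // ... device-specific handling; check timeout.IsCancellationRequested periodically.
    }
}

Prioritisation could be added by swapping the BlockingCollection's default backing store for a priority-ordered collection on the dequeuing side.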
Use other highly concurrent programming models besides threading, though threading is one of the highly concurrent models too.
For socket/TCP-IP/network messaging, use epoll on Linux 2.6.x and completion ports on Windows/MSVC.
See the document named EffoNetMsg.pdf at http://code.google.com/p/effonetmsg/downloads/list to learn more about highly concurrent programming models. We use only 2 or 3 threads for multiple listeners and more than 1000 clients.