How can I use multithreading in C# to send SMS messages to multiple people at the same time? I must use multithreading, meaning the SMS-sending code/process must execute independently for each recipient, concurrently. How can I do this? Please guide me.
Start reading the documentation, or a book like "C# in 21 Days".
System.Threading is your namespace for threads. Starting a thread is trivial, but I would not go that way.
Look into ThreadPool and queue a work item for every SMS. The ThreadPool will automatically start threads as needed. This is more memory-efficient than creating dedicated threads, especially if you use the pool in multiple places of your application (as threads may get shared).
There are ample samples of using work items;
http://msdn.microsoft.com/en-us/library/system.threading.threadpool.queueuserworkitem%28VS.71%29.aspx
is a decent starting point, documentation-wise.
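A minimal sketch of that approach, assuming a hypothetical SendSms(phoneNumber, text) method standing in for your actual SMS gateway call:

```csharp
using System;
using System.Threading;

class SmsSender
{
    // Hypothetical delivery method: replace the body with your SMS gateway call.
    static void SendSms(string phoneNumber, string text)
    {
        Console.WriteLine("Sending '{0}' to {1} on thread {2}",
            text, phoneNumber, Thread.CurrentThread.ManagedThreadId);
    }

    static void Main()
    {
        string[] recipients = { "+15550001", "+15550002", "+15550003" };
        using (var done = new CountdownEvent(recipients.Length))
        {
            foreach (string number in recipients)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try { SendSms(number, "Hello!"); }   // runs on a pool thread
                    finally { done.Signal(); }           // count this message as handled
                });
            }
            done.Wait(); // block the caller until every queued work item has finished
        }
    }
}
```

The pool decides how many threads actually run; each SMS goes out independently without you managing thread lifetimes yourself.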
Well, in my project (a WCF service) I use the ThreadPool class for sending emails, for instance. The emails get queued, which ensures the service will not "hang". Creating lots of separate threads can clog the system.
I am creating a web service that creates a huge number of small Java timer threads (over 10k). I can only seem to create 2k timer threads before I get OutOfMemoryError: unable to create new native thread. How do I solve this? I am using a MacBook Pro to run my Tomcat server. I've configured the ulimit (-u) max user processes to double what it used to be, but I still get the same problem. What are my options, if any, to make this doable?
It's often a bad idea for web applications to start their own (few) threads, let alone 10K threads - and then "as timers"? Seriously? Don't go there.
What can you do?
Don't rely on the ability to create those threads.
Change your architecture! Use a scheduler library that has solved this problem already (e.g. Quartz or others).
If you don't want to use an external library (why wouldn't you?), implement a single timer thread that executes the scheduled operations when they're due. Do not use a new thread for each scheduled operation (a minimal sketch of this follows below).
If you wanted to boil 100 eggs, would you buy 100 timers?
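For illustration, here is what that single-timer-thread idea boils down to, sketched in C# since that is the language used in most of these threads (in Java, Quartz or a ScheduledThreadPoolExecutor already gives you the same thing): keep the jobs ordered by due time and let one dedicated thread sleep until the next job is due.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch of a single-thread scheduler: one thread services every scheduled job.
class TinyScheduler
{
    private readonly SortedList<DateTime, Action> _jobs = new SortedList<DateTime, Action>();
    private readonly object _lock = new object();

    public void Schedule(DateTime dueUtc, Action job)
    {
        lock (_lock)
        {
            while (_jobs.ContainsKey(dueUtc))   // keys must be unique,
                dueUtc = dueUtc.AddTicks(1);    // so nudge duplicates forward by a tick
            _jobs.Add(dueUtc, job);
            Monitor.Pulse(_lock);               // wake the worker to re-check the next due time
        }
    }

    public void Run() // call this on exactly one dedicated thread
    {
        while (true)
        {
            Action job;
            lock (_lock)
            {
                while (_jobs.Count == 0) Monitor.Wait(_lock);
                TimeSpan delay = _jobs.Keys[0] - DateTime.UtcNow;
                if (delay > TimeSpan.Zero)
                {
                    Monitor.Wait(_lock, delay); // sleep until due, or until a new job arrives
                    continue;
                }
                job = _jobs.Values[0];
                _jobs.RemoveAt(0);
            }
            job(); // run outside the lock so slow jobs don't block Schedule()
        }
    }
}
```

One thread, ten thousand scheduled operations, and no native-thread limit in sight.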
I am in the process of moving a newsletter service from a Windows server running Microsoft.NET 4.5 to a Linux server running Mono 3.0.3. The service uses Amazon's "Simple Email Service" (SES) to deliver the emails, via the official .NET SDK (wrapping a REST interface).
While sending emails via SES sequentially from Mono turns out to be slightly faster than Microsoft.NET using similar hardware, I am running into serious performance trouble when attempting to deliver multiple mails in parallel. Below is a chart showing the time required to send 128 emails on both platforms using a varying number of threads. As you can see, performance on Mono degrades rapidly after 8 threads, and with 128 threads I get only HTTP timeouts – not a single email is delivered.
Profiling via console output, it turns out that the first "batch" of mails is the source of the slowdown. With two threads, each sending one email, both threads finish in around 2200 ms. With four threads, each sending one email, they all finish in around 4400 ms. Eight threads, around 8800 ms, and so on. It seems as if the web service calls, while spawned simultaneously, run sequentially and have to wait for one another before returning.
Any ideas what might be triggering this behavior? The source code for the Amazon SDK is available on GitHub, but I have not been able to pinpoint anything suspicious. Maybe the use of the async methods on HttpWebRequest?
Yes, stop using async HttpWebRequest* for now because there is a bug being discussed in the Mono list. A patch has been provided, but apparently is not good enough and has been reverted from master.
If you're good with low-level code, it would be nice if you contributed a patch.
* The fastest way to stop using the async infrastructure is running mono with the environment variable MONO_DISABLE_AIO=1. By the way, if you're using more than one thread anyway, maybe a Parallel.For would be enough while keeping the code non-asynchronous? The best use case of async is actually to avoid threading and still manage to achieve parallelization (or rather, avoid blocking waits).
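A sketch of that Parallel.For suggestion, with SendOneEmail standing in for a synchronous call into the SES SDK (it is not the SDK's real method name):

```csharp
using System;
using System.Threading.Tasks;

class MailBlast
{
    // Placeholder for a blocking send through the SES client.
    static void SendOneEmail(int i)
    {
        Console.WriteLine("sending email {0}", i);
    }

    static void Main()
    {
        // Cap the concurrency explicitly instead of spawning one thread per mail.
        var options = new ParallelOptions { MaxDegreeOfParallelism = 8 };
        Parallel.For(0, 128, options, i => SendOneEmail(i)); // blocking sends, no async HttpWebRequest
    }
}
```

Combined with MONO_DISABLE_AIO=1 on the command line, this keeps the requests parallel while steering clear of the buggy async code path.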
I know the term "Load Balancing" can be very broad, but the subject I'm trying to explain is more specific, and I don't know the proper terminology. What I'm building is a set of Server/Client applications. The server needs to be able to handle a massive amount of data transfer, as well as client connections, so I started looking into multi-threading.
There are essentially three ways I can see of implementing any sort of threading for the server...
One thread handling all requests (defeats the purpose of a thread if 500 clients are logged in)
One thread per user (risky: creating one thread for each of the 500 clients)
Pool of threads which divide the work evenly for any number of clients (What I'm seeking)
The third one is what I'd like to know about. It consists of a setup like this:
Maximum 250 threads running at once
500 clients will not create 500 threads, but share the 250
A Queue of requests will be pending to be passed into a thread
A thread is not tied down to a client, and vice-versa
Server decides which thread to send a request to based on activity (load balance)
I'm not seeking any code quite yet, just information on how a setup like this works, and preferably a tutorial for accomplishing it in Delphi (XE2). Even the proper term or name for this subject would be enough for me to do the searching myself.
EDIT
I found it necessary to explain a little about what this will be used for. I will be streaming both commands and images; there will be a double-socket setup with one "Main Command Socket" and an "Add-on Image Streaming Socket", so one logical connection is really two socket connections.
Each connection to the server's main socket creates (or re-uses) an object representing all the data needed for that connection, including threads, images, settings, etc. For every connection to the main socket, a streaming socket is also connected. It's not always streaming images, but the command socket is always ready.
The point is that I already have a threading mechanism in my current setup (one thread per session object) and I'd like to shift that over to a pool-like multithreading environment. The two connections together require higher-level control over these threads, and I can't rely on something like Indy to keep them synchronized. I'd rather know how things work than learn to trust something else to do the work for me.
IOCP server. It's the only high-performance solution. It's essentially asynchronous in user mode ('overlapped I/O' in M$-speak): a pool of threads issues WSARecv, WSASend, and AcceptEx calls and then waits on an IOCP queue for completion records. When something useful happens, a kernel thread pool performs the actual I/O and then queues up the completion records.
You need at least a buffer class and a socket class (and probably others for high performance: objectPool and pooledObject classes so you can make socket and buffer pools).
500 threads may not be an issue on a server class computer. A blocking TCP thread doesn't do much while it's waiting for the server to respond.
There's nothing stopping you from creating some type of work queue on the server side, served by a limited size pool of threads. A simple thread-safe TList works great as a queue, and you can easily put a message handler on each server thread for notifications.
Still, at some point you may have too much work, or too many threads, for the server to handle. This is usually handled by adding another application server.
To ensure scalability, code for the idea of multiple servers, and you can keep scaling by adding hardware.
There may be some reason to limit the number of actual work threads, such as limiting lock contention on a database, or something similar, however, in general, you distribute work by adding threads, and let the hardware (CPU, redirector, switch, NAS, etc.) schedule the load.
Your implementation is completely tied to the communications components you use. If you use Indy, or anything based on Indy, it is one thread per connection, period! There is no way to change this. Indy will scale to hundreds of connections, but not thousands. Your best hope for using thread pools with your communications components is IOCP, but here your choices are limited by the lack of third-party components. I have done all this investigation before, and you can see my question at stackoverflow.com/questions/7150093/scalable-delphi-tcp-server-implementation.
I have a fully working distributed development framework (threading and comms) that has been used in production for over 3 years now across more than a half-dozen separate systems and basically covers everything you have asked so far. The code can be found on the web as well.
I am building an application where I have inputs from printers over the network (on specific ports) and from other files which are created in a folder, either locally or over the network. The user can create different threads to monitor different folders at the same time, as well as threads to handle the input from these printers over the network. The application is supposed to process the input data according to its type and output it. On the other end of the application, there would be 4 threads waiting for input data from the input threads (which could number 10 or 20) to process and apply 4 different tasks.
As we will have many threads running at the same time, I thought I would use MSMQ to manage them. Does MSMQ fit this scenario, or should I use another technique for managing these threads in terms of scheduling, prioritizing, etc.?
(P.S.: I was thinking of building my own ThreadEngine class to take care of all of these things, until I heard about MSMQ, which I am still not sure is the right thing to use.)
MSMQ would be useful for managing your input/output data, not your threads. .NET already has the ThreadPool, the CCR and the TPL to assist you with concurrency and multithreading, so I would suggest reading up on those technologies and choosing the most appropriate one.
MSMQ is a system message queue, not a thread pool manager.
This could be interesting in a case where you don't really mind poor performance and are really going for a system where tasks are persistent and transactional to guarantee execution.
If you are looking for performance then I agree with the other folks and highly discourage you from doing this, even with non-durable (RAM) queues.
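For completeness, this is roughly what the "persistent and transactional" use of MSMQ looks like from .NET via System.Messaging; the queue path and message body below are just placeholders, and the point is that the message, not a thread, is what gets queued:

```csharp
using System;
using System.Messaging;

class MsmqSketch
{
    const string Path = @".\Private$\workitems"; // placeholder local private queue

    static void Main()
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path, transactional: true);

        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            // Enqueue a work item inside a transaction; it survives process restarts.
            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                queue.Send("process-file:job42.xml", "work item", tx);
                tx.Commit();
            }

            // Dequeue it again (normally a separate worker process would do this).
            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                var msg = queue.Receive(tx);
                Console.WriteLine((string)msg.Body);
                tx.Commit();
            }
        }
    }
}
```

Durable and transactional, but every hop goes through the message queuing service, which is exactly why it is the wrong tool if raw throughput between in-process threads is what you need.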
I am writing an application server (again, not related to a question I already posted here) and I am wondering what strategies to use when creating worker threads that work against the database. Some preliminary details: the server receives XML and sends back XML, all the requests query a database, and each request can take from a few milliseconds to a few seconds.
Say, for example, that your server services a small to medium number of clients, which in turn send a small number of requests per connection. Is it safe to have one worker thread per connection, or should it be per request? Also, should a thread pool be used to limit the resources used by the server, or should a worker be added each time a new connection/request is made?
Should the server limit the number of threads it creates to an upper limit?
Hope I am not too vague ... I can hardly keep my eyes open.
If you don't have extensive experience, writing application servers is a daunting task. It can be eased by using frameworks like ACE that allow you to build different configurations of your app-serving infrastructure, such as thread-per-connection, thread pools, and leader/follower, and then load the appropriate configuration with an extensible service framework.
I would recommend reading these books on ACE to get an idea of what the framework can do for you:
C++ Network Programming: Mastering Complexity Using ACE and Patterns
C++ Network Programming: Systematic Reuse with ACE and Frameworks
The way I write apps like this is to make the number of threads configurable via the command line and/or a configuration file. I then do some load testing with different numbers of threads - there is always an optimal number beyond which performance begins to degrade.
If you follow the model adopted by Java EE app server developers, there's a queue for incoming requests and a pool of worker threads to service them. It's one thread per request. When a worker thread fulfills a request, it goes back into the pool. If incoming requests show up faster than the worker thread pool can service them, the queue allows them to stack up until a worker thread is released. Both the queue size and the thread pool size can be tuned to match your situation.
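A minimal sketch of that model (the queue capacity and worker count are the two knobs being described; the request type and handler here are placeholders):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class RequestPool
{
    // Bounded queue: producers block once 1000 requests are pending (the "stack up" limit).
    static readonly BlockingCollection<string> Requests =
        new BlockingCollection<string>(boundedCapacity: 1000);

    static void Handle(string request)
    {
        Console.WriteLine("{0} handled by thread {1}",
            request, Thread.CurrentThread.ManagedThreadId);
    }

    static void Main()
    {
        const int workerCount = 16; // tune against your load tests
        var workers = new Thread[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            workers[i] = new Thread(() =>
            {
                // Each worker takes one request at a time, then "returns to the pool".
                foreach (var request in Requests.GetConsumingEnumerable())
                    Handle(request);
            });
            workers[i].IsBackground = true;
            workers[i].Start();
        }

        // The listener side only enqueues; it never manages worker threads directly.
        for (int i = 0; i < 100; i++)
            Requests.Add("request " + i);

        Requests.CompleteAdding();           // no more incoming work
        foreach (var w in workers) w.Join(); // drain the queue, then exit
    }
}
```

The two numbers (1000 and 16) are the tuning points the answer refers to; load testing, as suggested above, is how you find the values that fit your workload.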
I'd wonder why anyone would feel the need to write their own server from scratch, especially when the scenario you describe is solved so well by others. If your wish is education, good luck. If you think you're going to improve on what's been done in the past, I'd re-examine that assumption.