I use a Timer this way:
t = new Timer();
t.Interval = 10000;
t.Elapsed += ElapsedEvent;
and that's the ElapsedEvent method:
private void ElapsedEvent(object sender, ElapsedEventArgs e)
{
    t.Stop();
    try
    {
        var sessions = GetActiveSessions();
        foreach (var session in sessions)
        {
            Task.Factory.StartNew(() => MyProcessTask(session));
        }
    }
    catch (Exception ex)
    {
    }
    t.Start();
}
In other words, the handler runs every 10,000 ms.
But the RAM usage keeps increasing forever, and I eventually have to restart the Windows service where this code runs.
Is the way I use Task.Factory incorrect? Is the memory allocated by these tasks never released by the garbage collector when used this way?
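For comparison, here is a minimal sketch of the same handler (my rewrite, using the same GetActiveSessions and MyProcessTask as above, and not an assertion about the cause of the leak) in which the started tasks are waited on and their exceptions observed before the timer is restarted:

private void ElapsedEvent(object sender, ElapsedEventArgs e)
{
    t.Stop();
    var tasks = new List<Task>();
    try
    {
        foreach (var session in GetActiveSessions())
        {
            tasks.Add(Task.Factory.StartNew(() => MyProcessTask(session)));
        }
        Task.WaitAll(tasks.ToArray()); // observes exceptions thrown inside the tasks
    }
    catch (AggregateException ex)
    {
        // log ex.InnerExceptions rather than swallowing them silently
    }
    finally
    {
        t.Start();
    }
}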
I have a situation where I have to use Task.Run in my ForEach loop.
Requirement:
I am going to be forced to manually kill the thread.
I have a button that can start and stop this Thread / Task.Run in a for loop.
Problem:
My problem is that when I start the Task.Run method it runs, but when I try to stop it using CancellationTokenSource or runningTaskThread.Abort() it does not stop. It keeps going, and when I start a new Task.Run the old one is still running alongside it, so every start adds more threads.
Code:
Below is my code for starting the thread:
var messages = rootObject.MultiQData.Messages
    .Where(m => m.TimeStamp > DateTime.Now)
    .OrderBy(x => x.TimeStamp)
    .ToList();

//Simulate MultiQ file in background
if (messages.Count > 0)
{
    cancellationTokenSource = new CancellationTokenSource();
    cancellationToken = cancellationTokenSource.Token;

    Task.Factory.StartNew(
        () =>
        {
            runningTaskThread = Thread.CurrentThread;
            // Note: the token is only passed to StartNew and is never checked
            // inside this delegate, so Cancel() cannot stop work that has already started.
            messages.ForEach(
                m => SetUpTimer(m, rootObject.MultiQData.Connection.FleetNo));
        }, cancellationToken);
}
To stop the Task:
if (cancellationTokenSource != null)
{
    if (cancellationToken.IsCancellationRequested)
        return;
    else
        cancellationTokenSource.Cancel();
}
I have also tried a Thread with Thread.Abort, but it does not work.
Please help me solve this issue.
I found a solution using timer.Stop() and timer.Dispose(). When the thread is created I call SetUpTimer, and SetUpTimer creates multiple timers.
So when the thread is stopped I dispose the timers, and that works for me.
For reference, see the code below:
private void SetUpTimer(Message message, string fleetNo)
{
    var ts = new MessageTimer();
    var interval = (message.TimeStamp - DateTime.Now).TotalMilliseconds;
    interval = interval <= 0 ? 100 : interval;

    ts.MessageWrapper = new MessageWrapper(message, fleetNo);
    ts.Interval = interval;
    ts.Elapsed += ts_Elapsed;
    ts.Start();

    //Add the timer to the list so it can be disposed when the simulation is stopped
    lsTimers.Add(ts);
}
private void StopTask()
{
    try
    {
        // Attempt to cancel the task politely
        if (cancellationTokenSource != null)
        {
            if (cancellationToken.IsCancellationRequested)
                return;
            else
                cancellationTokenSource.Cancel();
        }

        //Stop All Timer
        foreach (var timer in lsTimers)
        {
            timer.Stop();
            timer.Dispose();
        }
    }
    catch (Exception ex)
    {
        errorLogger.Error("Error while Stop simulation :", ex);
    }
}
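One small addition worth considering (my suggestion, not part of the original answer): clear the timer list after disposing, so a later call to StopTask does not touch already-disposed timers:

//Stop and dispose all timers, then forget them
foreach (var timer in lsTimers)
{
    timer.Stop();
    timer.Dispose();
}
lsTimers.Clear();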
I have a call() method in my code which, based on certain conditions, calls specific methods:
call() {
    if (a) {
        methodA();
    }
    if (b) {
        methodB();
    }
    if (c) {
        methodC();
    }
}
In the above scenario, I want to limit concurrent executions for methodC.
How can this be achieved?
What you need here is a semaphore (see the bouncer/nightclub analogy in the example below).
// Create the semaphore with 3 slots, where 3 are available.
var bouncer = new Semaphore(3, 3);

call() {
    if (a) {
        methodA();
    }
    if (b) {
        methodB();
    }
    if (c) {
        // Let a thread execute only after acquiring a semaphore slot.
        bouncer.WaitOne();
        methodC();
        // This thread is done. Let someone else go for it.
        bouncer.Release(1);
    }
}
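For in-process .NET code, SemaphoreSlim is a lighter-weight alternative to Semaphore. A minimal sketch of the same throttling (my illustration, assuming methodC is the only call that needs it):

// Allow at most 3 concurrent executions of methodC (sketch, not the original answer's code)
static readonly SemaphoreSlim bouncer = new SemaphoreSlim(3, 3);

void call()
{
    if (a) methodA();
    if (b) methodB();
    if (c)
    {
        bouncer.Wait();          // or await bouncer.WaitAsync() in async code
        try { methodC(); }
        finally { bouncer.Release(); }
    }
}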
If you want to limit the number of concurrent executions to at most one at a time, then you should use a Lock. In Java it should look like:
final Lock lock = new ReentrantLock();

call() {
    if (a) {
        methodA();
    }
    if (b) {
        methodB();
    }
    if (c) {
        lock.lock();
        try {
            methodC();
        } finally {
            lock.unlock();
        }
    }
}
If you want to limit the number of concurrent executions to more than one at a time, you can use a Semaphore; here CONCURRENT_CALLS_ALLOWED is an int.
final Semaphore semaphore = new Semaphore(CONCURRENT_CALLS_ALLOWED);

call() {
    if (a) {
        methodA();
    }
    if (b) {
        methodB();
    }
    if (c) {
        semaphore.acquire(); // throws a checked InterruptedException
        try {
            methodC();
        } finally {
            semaphore.release();
        }
    }
}
Suppose I have a BlockingCollection, OutputQueue, which holds many items. Currently my code is:
public void Consumer()
{
    foreach (var workItem in OutputQueue.GetConsumingEnumerable())
    {
        PlayMessage(workItem);
        Console.WriteLine("Works on {0}", workItem.TaskID);
        OutLog.Write("Works on {0}", workItem.TaskID);
        Thread.Sleep(500);
    }
}
Now I want PlayMessage(workItem) to run across multiple tasks, because some work items need much more time than others; the differences are huge.
As for the method PlayMessage(workItem), it makes a few service calls, plays text-to-speech, and does some logging.
bool successRouting = serviceCollection.SvcCall_GetRoutingData(string[] params, out ex);
bool successDialingService = serviceCollection.SvcCall_GetDialingServiceData(string[] params, out excep);
PlayTTS(workItem.TaskType); // playing text to speech
So how should I change my code?
What I thought of was:
public async Task Consumer()
{
    foreach (var workItem in OutputQueue.GetConsumingEnumerable())
    {
        await PlayMessage(workItem);
        Console.WriteLine("Works on {0}", workItem.TaskID);
        OutLog.Write("Works on {0}", workItem.TaskID);
        Thread.Sleep(500);
    }
}
Since you want parallelism with your PlayMessage, I would suggest looking into TPL Dataflow, as it combines parallel work with async/await, so you can await your work properly.
TPL Dataflow is constructed of Blocks, and each block has its own characteristics.
Some popular ones are:
ActionBlock<TInput>
TransformBlock<T, TResult>
I would construct something like the following:
var workItemBlock = new ActionBlock<WorkItem>(
    workItem =>
    {
        PlayMessage(workItem);
        Console.WriteLine("Works on {0}", workItem.TaskID);
        OutLog.Write("Works on {0}", workItem.TaskID);
    },
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = Environment.ProcessorCount // set max parallelism as you wish
    });

foreach (var workItem in OutputQueue.GetConsumingEnumerable())
{
    workItemBlock.Post(workItem);
}

workItemBlock.Complete();
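If the caller needs to know when every posted item has been processed, the block exposes a Completion task that can be awaited after calling Complete():

workItemBlock.Complete();
await workItemBlock.Completion; // completes once all posted work items have been processed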
Here's another solution, not based on TPL Dataflow. It uses SemaphoreSlim to throttle the number of parallel playbacks (warning, untested):
public async Task Consumer()
{
    var semaphore = new SemaphoreSlim(NUMBER_OF_PORTS);
    var pendingTasks = new HashSet<Task>();
    var syncLock = new Object();

    Action<Task> queueTaskAsync = async (task) =>
    {
        // be careful with exceptions inside "async void" methods
        // keep failed/cancelled tasks in the list
        // they will be observed outside
        lock (syncLock)
            pendingTasks.Add(task);

        await semaphore.WaitAsync().ConfigureAwait(false);
        try
        {
            await task;
        }
        catch
        {
            if (!task.IsCanceled && !task.IsFaulted)
                throw;
            // the error will be observed later,
            // keep the task in the list
            return;
        }
        finally
        {
            semaphore.Release();
        }

        // remove successfully completed task from the list
        lock (syncLock)
            pendingTasks.Remove(task);
    };

    foreach (var workItem in OutputQueue.GetConsumingEnumerable())
    {
        var item = workItem;
        Func<Task> workAsync = async () =>
        {
            await PlayMessage(item);
            Console.WriteLine("Works on {0}", item.TaskID);
            OutLog.Write("Works on {0}", item.TaskID);
            Thread.Sleep(500);
        };

        var task = workAsync();
        queueTaskAsync(task);
    }

    await Task.WhenAll(pendingTasks.ToArray());
}
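Note that in the sketch above the work task is started before the semaphore is acquired, so the semaphore limits how many tasks are awaited at once rather than how many PlayMessage calls actually run concurrently. A variant (my sketch, not part of the original answer) that acquires a slot before starting the work would bound the number of concurrent playbacks:

// Inside the foreach loop: acquire a port before starting the playback
await semaphore.WaitAsync();
var task = Task.Run(async () =>
{
    try
    {
        await PlayMessage(item);
        Console.WriteLine("Works on {0}", item.TaskID);
        OutLog.Write("Works on {0}", item.TaskID);
    }
    finally
    {
        semaphore.Release();
    }
});
pendingTasks.Add(task);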
My somewhat data-intensive WP7 app persists data as follows: I maintain a change journal reflecting all user activity, and every couple of seconds a thread timer spins up a thread-pool thread that flushes the change journal to a database inside a transaction. It looks something like this:
When the user exits, I stop the timer, flush the journal on the UI thread (takes no more than a second or two), and dismount the DB.
However, if the worker thread is active when the user exits, I can't figure out how to react gracefully. The system seems to kill the worker thread, so it never finishes its work and never gives up its lock on the database connection; the UI thread then attempts to acquire the lock and is immediately killed by the system. I tried setting a flag on the UI thread requesting the worker to abort, but I think the worker was interrupted before it read the flag. Everything works fine except for this 1-in-100 scenario where some user changes end up not being saved to the db, and I can't seem to get around it.
Very simplified code below:
private Timer _SweepTimer = new Timer(SweepCallback, null, 5000, 5000);
private volatile bool _BailOut = false;

private void SweepCallback(object state)
{
    lock (db)
    {
        db.startTransaction();
        foreach (var entry in changeJournal)
        {
            //CRUD entry as appropriate
            if (_BailOut)
            {
                db.rollbackTransaction();
                return;
            }
        }
        db.endTransaction();
        changeJournal.Clear();
    }
}

private void RespondToSystemExit()
{
    _BailOut = true; //Set flag for worker to exit
    lock (db)
    {
        //In theory, should acquire the lock after the bg thread bails out
        SweepCallback(null); //Flush to db on the UI thread
        db.dismount();       //App is now ready to close
    }
}
Well, just to close this question: I ended up using a ManualResetEvent instead of the locking, which to the best of my understanding is a misuse of ManualResetEvent, risky and hacky, but it's better than nothing.
I still don't know why my original code wasn't working.
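For illustration only, here is a sketch of what such a ManualResetEvent-based workaround might look like (my reconstruction of the idea, not the author's actual code, and subject to the same "risky and hacky" caveat):

// Sketch: the worker signals when it is idle; the UI thread waits on that
// signal instead of taking the lock.
private readonly ManualResetEvent _workerIdle = new ManualResetEvent(true);

private void SweepCallback(object state)
{
    _workerIdle.Reset();   // worker is busy
    try
    {
        // flush the change journal inside a transaction, as before
    }
    finally
    {
        _workerIdle.Set(); // worker is idle again
    }
}

private void RespondToSystemExit()
{
    _SweepTimer.Change(Timeout.Infinite, Timeout.Infinite); // stop further sweeps
    _workerIdle.WaitOne(TimeSpan.FromSeconds(5));           // wait for a running sweep to finish
    SweepCallback(null);                                    // final flush on the UI thread
    db.dismount();
}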
EDIT: For posterity, I'm reposting the code to reproduce this from the MS forums:
//This is a functioning console app showing the code working as it should. Press "w" and then "i" to start and then interrupt the worker
using System;
using System.Threading;
namespace deadlocktest {
    class Program {
        static void Main(string[] args) {
            var tester = new ThreadTest();
            string input = "";
            while (!input.Equals("x")) {
                input = Console.ReadLine();
                switch (input) {
                    case "w":
                        tester.StartWorker();
                        break;
                    case "i":
                        tester.Interrupt();
                        break;
                    default:
                        return;
                }
            }
        }
    }

    class ThreadTest {
        private Object lockObj = new Object();
        private volatile bool WorkerCancel = false;

        public void StartWorker() {
            ThreadPool.QueueUserWorkItem((obj) => {
                if (Monitor.TryEnter(lockObj)) {
                    try {
                        Log("Worker acquired the lock");
                        for (int x = 0; x < 10; x++) {
                            Thread.Sleep(1200);
                            Log("Worker: tick" + x.ToString());
                            if (WorkerCancel) {
                                Log("Worker received exit signal, exiting");
                                WorkerCancel = false;
                                break;
                            }
                        }
                    } finally {
                        Monitor.Exit(lockObj);
                        Log("Worker released the lock");
                    }
                } else {
                    Log("Worker failed to acquire lock");
                }
            });
        }

        public void Interrupt() {
            Log("UI thread - Setting interrupt flag");
            WorkerCancel = true;
            if (Monitor.TryEnter(lockObj, 5000)) {
                try {
                    Log("UI thread - successfully acquired lock from worker");
                } finally {
                    Monitor.Exit(lockObj);
                    Log("UI thread - Released the lock");
                }
            } else {
                Log("UI thread - failed to acquire the lock from the worker");
            }
        }

        private void Log(string Data) {
            Console.WriteLine(string.Format("{0} - {1}", DateTime.Now.ToString("mm:ss:ffff"), Data));
        }
    }
}
Here is nearly identical code that fails on WP7; just make a page with two buttons and hook them up:
using System;
using System.Diagnostics;
using System.Threading;
using System.Windows;
using Microsoft.Phone.Controls;
namespace WorkerThreadDemo {
    public partial class MainPage : PhoneApplicationPage {
        public MainPage() {
            InitializeComponent();
        }

        private Object lockObj = new Object();
        private volatile bool WorkerCancel = false;

        private void buttonStartWorker_Click(object sender, RoutedEventArgs e) {
            ThreadPool.QueueUserWorkItem((obj) => {
                if (Monitor.TryEnter(lockObj)) {
                    try {
                        Log("Worker acquired the lock");
                        for (int x = 0; x < 10; x++) {
                            Thread.Sleep(1200);
                            Log("Worker: tick" + x.ToString());
                            if (WorkerCancel) {
                                Log("Worker received exit signal, exiting");
                                WorkerCancel = false;
                                break;
                            }
                        }
                    } finally {
                        Monitor.Exit(lockObj);
                        Log("Worker released the lock");
                    }
                } else {
                    Log("Worker failed to acquire lock");
                }
            });
        }

        private void Log(string Data) {
            Debug.WriteLine(string.Format("{0} - {1}", DateTime.Now.ToString("mm:ss:ffff"), Data));
        }

        private void buttonInterrupt_Click(object sender, RoutedEventArgs e) {
            Log("UI thread - Setting interrupt flag");
            WorkerCancel = true;
            //Thread.Sleep(3000); UNCOMMENT ME AND THIS WILL START TO WORK!
            if (Monitor.TryEnter(lockObj, 5000)) {
                try {
                    Log("UI thread - successfully acquired lock from worker");
                } finally {
                    Monitor.Exit(lockObj);
                    Log("UI thread - Released the lock");
                }
            } else {
                Log("UI thread - failed to acquire the lock from the worker");
            }
        }
    }
}
Your approach should work when you operate from the Application_Deactivated or Application_Closing event. MSDN says:
There is a time limit for the Deactivated event to complete. The device may terminate the application if it takes longer than 10 seconds to save the transient state.
So if, as you say, it only takes a few seconds, this should be fine. Unless the docs don't tell the whole story, or your worker thread takes longer to exit than you think.
As Heinrich Ulbricht already said, you have up to 10 seconds to finish your work, but you have to block the main thread to actually get that time.
That means that even if your background thread still has a lot of work to do, if your UI thread simply does nothing in the Closing/Deactivated event, you will not get your 10 seconds.
Our application actually does an eternal wait on the UI thread in the closing event to allow the background thread to send some data through sockets, roughly as sketched below.
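A minimal sketch of that idea (my illustration only; _backgroundWorkDone is a hypothetical ManualResetEvent that the background thread sets when its work is finished):

// In App.xaml.cs (WP7): block the UI thread in the Closing event so the
// background thread is actually granted time to finish.
private void Application_Closing(object sender, ClosingEventArgs e)
{
    // an eternal wait, as described above, would be WaitOne() with no timeout;
    // a bounded wait keeps the app under the ~10 second limit
    _backgroundWorkDone.WaitOne(TimeSpan.FromSeconds(8));
}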
I'm experiencing an issue managing threads in .NET 4.0 C#, and my knowledge of threads is not sufficient to solve it, so I'm posting it here hoping that somebody can give me some advice.
The scenario is the following:
We have a Windows service on C# Framework 4.0 that (1) connects via socket to a server to get a .PCM file, (2) converts it to a .WAV file, (3) sends it via email (SMTP), and finally (4) notifies the originating server that it was sent successfully.
The server where the service is installed has 8 processors and 8 GB of RAM.
To allow parallel processing I've built the service with 4 threads; each one performs one of the tasks mentioned above.
In the code I have classes and methods for each task, so I create threads and invoke methods as follows:
Thread eachThread = new Thread(object.PerformTask);
Inside each method there is a while loop that checks whether the socket connection is alive, and keeps fetching or processing data depending on its purpose.
while (_socket.Connected){
//perform task
}
The problem is that as more services are installed (the same Windows service is replicated and pointed at two endpoints on the server to get the files via socket), CPU consumption increases dramatically. Each service keeps running and processing files, but at some point the CPU consumption is so high that the server just collapses.
The question is: what would you suggest for handling this scenario? In general terms, what would be a good approach to handling these highly demanding processing tasks so that CPU consumption doesn't bring the server down?
Thanks.
PS.: If anybody needs more details on the scenario, please let me know.
Edit 1
By CPU collapse I mean that the server gets so slow that we have to restart it.
Edit 2
Here is part of the code, so you can get an idea of how it's programmed:
while (true)
{
    //starting the service
    try
    {
        IPEndPoint endPoint = conn.SettingConnection();
        string id = _objProp.Parametros.IdApp;
        using (socket = conn.Connect(endPoint))
        {
            while (!socket.Connected)
            {
                _log.SetLog("INFO", "Conectando socket...");
                socket = conn.Connect(endPoint);
                //if the connection failed, wait 5 seconds for a new try.
                if (!socket.Connected)
                {
                    Thread.Sleep(5000);
                }
            }
            proInThread = new Thread(proIn.ThreadRun);
            conInThread = new Thread(conIn.ThreadRun);
            conOutThread = new Thread(conOut.ThreadRun);
            proInThread.Start();
            conInThread.Start();
            conOutThread.Start();
            proInThread.Join();
            conInThread.Join();
            conOutThread.Join();
        }
    }
    catch (Exception ex)
    {
        //log exception (a catch is required here for the try above; omitted in the original post)
    }
}
Edit 3
Thread 1
while (_socket.Connected)
{
    try
    {
        var conn = new AppConection(ref _objPropiedades);
        try
        {
            string message = conn.ReceiveMessage(_socket);
            lock (((ICollection)_queue).SyncRoot)
            {
                _queue.Enqueue(message);
                _syncEvents.NewItemEvent.Set();
                _syncEvents.NewResetEvent.Set();
            }
            lock (((ICollection)_total_rec).SyncRoot)
            {
                _total_rec.Add("1");
            }
        }
        catch (SocketException ex)
        {
            //log exception
        }
        catch (IndexOutOfRangeException ex)
        {
            //log exception
        }
        catch (Exception ex)
        {
            //log exception
        }
        //message received
    }
    catch (Exception ex)
    {
        //logging error
    }
}
//release ANY instance that could be using memory
_socket.Dispose();
log = null;
Thread 2
while (_socket.Connected)
{
    try
    {
        _syncEvents.NewItemEventOut.WaitOne();
        if (_socket.Connected)
        {
            lock (((ICollection)_queue).SyncRoot)
            {
                total_queue = _queue.Count();
            }
            int i = 0;
            while (i < total_queue)
            {
                //EMail Emails;
                string mail = "";
                lock (((ICollection)_queue).SyncRoot)
                {
                    mail = _queue.Dequeue();
                    i = i + 1;
                }
                try
                {
                    conn.SendMessage(_socket, mail);
                    _syncEvents.NewResetEvent.Set();
                }
                catch (SocketException ex)
                {
                    //log exception
                }
            }
        }
        else
        {
            //log exception
            _syncEvents.NewAbortEvent.Set();
            Thread.CurrentThread.Abort();
        }
    }
    catch (InvalidOperationException e)
    {
        //log exception
    }
    catch (Exception e)
    {
        //log exception
    }
}
//release ANY instance that could be using memory
_socket.Dispose();
conn = null;
log = null;
Thread 3
while (_socket.Connected)
{
    int total_queue = 0;
    try
    {
        _syncEvents.NewItemEvent.WaitOne();
        lock (((ICollection)_queue).SyncRoot)
        {
            total_queue = _queue.Count();
        }
        int i = 0;
        while (i < total_queue)
        {
            if (mgthreads.GetThreatdAct() < mgthreads.GetMaxThread())
            {
                string message = "";
                lock (((ICollection)_queue).SyncRoot)
                {
                    message = _queue.Dequeue();
                    i = i + 1;
                }
                count++;
                lock (((ICollection)_queueO).SyncRoot)
                {
                    app.SetParameters(_socket, _id, message, _queueO, _syncEvents,
                        _total_Env, _total_err);
                }
                Thread producerThread = new Thread(app.ThreadJob)
                {
                    Name = "ProducerThread_" + DateTime.Now.ToString("ddMMyyyyhhmmss"),
                    Priority = ThreadPriority.AboveNormal
                };
                producerThread.Start();
                producerThread.Join();
                mgthreads.IncThreatdAct(producerThread);
            }
            mgthreads.DecThreatdAct();
        }
        mgthreads.DecThreatdAct();
    }
    catch (InvalidOperationException e)
    {
    }
    catch (Exception e)
    {
    }
    Thread.Sleep(500);
}
//release ANY instance that could be using memory
_socket.Dispose();
app = null;
log = null;
mgthreads = null;
Thread 4
MessageVO mesVo = fac.ParseMessageXml(_message);
I would lower the thread priority and have all threads pass through a semaphore that limits concurrency to Environment.ProcessorCount. This is not a perfect solution, but it sounds like it is enough in this case and an easy fix; a sketch follows below.
Edit: Thinking about it, you have to fold the 10 services into one single process, because otherwise you won't have centralized control over the threads that are running. If you have 10 independent processes, they cannot coordinate.
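A minimal sketch of that suggestion (my illustration; the method and field names are hypothetical, and SemaphoreSlim is used here as the in-process counterpart of a semaphore):

// Shared gate: at most Environment.ProcessorCount work items are processed at once.
private static readonly SemaphoreSlim _gate = new SemaphoreSlim(Environment.ProcessorCount);

private void ProcessItem(string message)
{
    Thread.CurrentThread.Priority = ThreadPriority.BelowNormal; // lower the priority
    _gate.Wait();   // block until a slot is free
    try
    {
        // convert the PCM file to WAV, send the email, notify the server, ...
    }
    finally
    {
        _gate.Release();
    }
}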
There should normally be no collapse because of high CPU usage. While any of the threads is waiting for something remote to happen (for instance, for the remote server to respond to a request), that thread uses no CPU. But while it is actually doing something, it uses CPU accordingly. In the tasks you mentioned there is no inherently high CPU usage (saving a PCM file as WAV requires no complex algorithm), so the high CPU usage looks like a sign of a programming error.