C# - how to get CPU, memory and disk usage in % as seen in Task Manager

These are the values shown in Task Manager.

The answer may be a bit long, but it covers the entire question except for the DISK USAGE part.

Getting the CPU utilization in percentage (%)

I'm not sure whether WMI could be used here, but I prefer to avoid WMI because not all machines support it. I will use other classes, such as PerformanceCounter, instead. Here's a sample that returns the CPU utilization in %:
private readonly PerformanceCounter cpuCounter =
    new PerformanceCounter("Processor", "% Processor Time", "_Total");

public float FinalResult;

public async void GetCPUCounter()
{
    CounterSample firstValue = cpuCounter.NextSample();
    await Task.Delay(900); // a delay is required because the first sample always returns 0
    CounterSample secondValue = cpuCounter.NextSample();
    FinalResult = CounterSample.Calculate(firstValue, secondValue);
    await Task.Delay(900);
    GetCPUCounter(); // call itself again so the value keeps refreshing
}
Now just use a Windows.Forms.Timer to repeatedly read the CPU utilization. Before that, call GetCPUCounter() once from anywhere in your code, say from the Form_load event:
private void Form_load(object sender, EventArgs e)
{
    GetCPUCounter();
}

private void mytimer_Tick(object sender, EventArgs e)
{
    string cpuUsage = FinalResult.ToString() + "%";
}
Getting the memory utilization in percentage (%)
Here's a full class that returns both the memory usage and the total memory:
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.VisualBasic.Devices; // add a reference to Microsoft.VisualBasic

public class Memory
{
    public int TotalRamInMb;
    public int TotalRamInGb;
    public double UsedRam;
    public int UsedRamPercentage;

    public string GetTotalRam()
    {
        var ci = new ComputerInfo();
        ulong mem = ci.TotalPhysicalMemory;
        int mb = Convert.ToInt32(mem / (double)(1024 * 1024));
        TotalRamInMb = mb;
        if (mb.ToString().Length <= 3)
            return mb + " MB physical memory";

        TotalRamInGb = Convert.ToInt32(mb / (double)1024);
        return TotalRamInGb + " GB physical memory";
    }

    public async void GetUsedRam()
    {
        // sums the private bytes of all processes; call GetTotalRam() first so TotalRamInMb is set
        double uRam = 0;
        Process[] allProc = Process.GetProcesses();
        foreach (var proc in allProc)
            uRam += proc.PrivateMemorySize64 / (double)(1024 * 1024);
        UsedRam = uRam;
        UsedRamPercentage = (int)(UsedRam * 100 / TotalRamInMb);
        await Task.Delay(900);
        GetUsedRam(); // call itself again so the values keep refreshing
    }
}
Read the values from the variables UsedRam, UsedRamPercentage, TotalRamInMb and so on. Make sure to call GetUsedRam() once, like we did with GetCPUCounter(), and then use a Forms.Timer to repeatedly read the values from the variables above.
I will update the answer with a disk usage retrieval process soon. Cheers!
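In the meantime, here is a minimal sketch of the missing disk part (my addition, assuming the standard Windows "PhysicalDisk" counter category is present); "% Disk Time" roughly corresponds to the disk activity figure Task Manager shows:

// A sketch, not part of the original answer: sample the "% Disk Time"
// counter the same way as the CPU counter above (first sample returns 0).
private readonly PerformanceCounter diskCounter =
    new PerformanceCounter("PhysicalDisk", "% Disk Time", "_Total");

public float DiskResult;

public async void GetDiskCounter()
{
    CounterSample first = diskCounter.NextSample();
    await Task.Delay(900);
    CounterSample second = diskCounter.NextSample();
    // The counter can briefly report more than 100, so clamp it for display.
    DiskResult = Math.Min(100f, CounterSample.Calculate(first, second));
    await Task.Delay(900);
    GetDiskCounter();
}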

Related

How to get total CPU % and memory % usage on Ubuntu in a .NET Core application

PerformanceCounter is supported in .NET Core, but it is not supported on Ubuntu, so is there any way to get the overall system CPU and memory usage in a .NET Core application (like Task Manager shows on Windows)?
After some searching, I did it with the code below (parts of it came from googling). Just FYI:
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

internal static class CpuMemoryMetrics4LinuxUtils
{
    private const int DigitsInResult = 2;
    private static long totalMemoryInKb;

    /// <summary>
    /// Get the system overall CPU usage percentage.
    /// </summary>
    /// <returns>The percentage value, e.g. if the usage is 30.1234 %,
    /// it returns 30.12.</returns>
    public static double GetOverallCpuUsagePercentage()
    {
        // refer to https://stackoverflow.com/questions/59465212/net-core-cpu-usage-for-machine
        var startTime = DateTime.UtcNow;
        var startCpuUsage = Process.GetProcesses().Sum(a => a.TotalProcessorTime.TotalMilliseconds);

        System.Threading.Thread.Sleep(500);

        var endTime = DateTime.UtcNow;
        var endCpuUsage = Process.GetProcesses().Sum(a => a.TotalProcessorTime.TotalMilliseconds);

        var cpuUsedMs = endCpuUsage - startCpuUsage;
        var totalMsPassed = (endTime - startTime).TotalMilliseconds;
        var cpuUsageTotal = cpuUsedMs / (Environment.ProcessorCount * totalMsPassed);

        return Math.Round(cpuUsageTotal * 100, DigitsInResult);
    }

    /// <summary>
    /// Get the system overall memory usage percentage.
    /// </summary>
    /// <returns>The percentage value, e.g. if the usage is 30.1234 %,
    /// it returns 30.12.</returns>
    public static double GetOccupiedMemoryPercentage()
    {
        var totalMemory = GetTotalMemoryInKb();
        var usedMemory = GetUsedMemoryForAllProcessesInKb();
        var percentage = (usedMemory * 100) / totalMemory;
        return Math.Round(percentage, DigitsInResult);
    }

    private static double GetUsedMemoryForAllProcessesInKb()
    {
        var totalAllocatedMemoryInBytes = Process.GetProcesses().Sum(a => a.PrivateMemorySize64);
        return totalAllocatedMemoryInBytes / 1024.0;
    }

    private static long GetTotalMemoryInKb()
    {
        // only parse the file once
        if (totalMemoryInKb > 0)
        {
            return totalMemoryInKb;
        }

        string path = "/proc/meminfo";
        if (!File.Exists(path))
        {
            throw new FileNotFoundException($"File not found: {path}");
        }

        using (var reader = new StreamReader(path))
        {
            string line = string.Empty;
            while (!string.IsNullOrWhiteSpace(line = reader.ReadLine()))
            {
                if (line.Contains("MemTotal", StringComparison.OrdinalIgnoreCase))
                {
                    // e.g. MemTotal: 16370152 kB
                    var parts = line.Split(':');
                    var valuePart = parts[1].Trim();
                    parts = valuePart.Split(' ');
                    var numberString = parts[0].Trim();

                    var result = long.TryParse(numberString, out totalMemoryInKb);
                    // InvalidDataException is used here because FileFormatException
                    // (WindowsBase) is not available on Linux.
                    return result ? totalMemoryInKb : throw new InvalidDataException($"Cannot parse 'MemTotal' value from the file {path}.");
                }
            }

            throw new InvalidDataException($"Cannot find the 'MemTotal' property from the file {path}.");
        }
    }
}
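For reference, the class above can be consumed like this (a minimal sketch):

var cpu = CpuMemoryMetrics4LinuxUtils.GetOverallCpuUsagePercentage();
var mem = CpuMemoryMetrics4LinuxUtils.GetOccupiedMemoryPercentage();
Console.WriteLine($"CPU: {cpu}%  Memory: {mem}%");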
You have to rely on the OS-specific utilities that provide CPU and memory information.
Run the command from your application and read/parse the output it returns.
I found an article that looks in line with what you are trying to achieve:
Reading Windows and Linux memory metrics with .NET Core
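A minimal sketch of that approach on Linux (my addition, assuming the free utility is on the PATH; its "Mem:" line is "total used free ..."):

using System;
using System.Diagnostics;
using System.Linq;

// Run `free -m` and parse the "Mem:" line to compute memory usage in %.
var psi = new ProcessStartInfo("free", "-m")
{
    RedirectStandardOutput = true,
    UseShellExecute = false
};
using (var proc = Process.Start(psi))
{
    string output = proc.StandardOutput.ReadToEnd();
    proc.WaitForExit();

    var memLine = output.Split('\n').First(l => l.StartsWith("Mem:"));
    var cols = memLine.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    double total = double.Parse(cols[1]);
    double used = double.Parse(cols[2]);
    Console.WriteLine($"Memory used: {used * 100 / total:F2}%");
}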

Performance testing: One core vs multiple

I'm trying to understand a problem I encountered recently in my project. I use the Aurigma library to resize images. It runs in single-threaded mode and produces only one thread during the calculation. Lately I decided to move to the ImageMagick project, because it is free and open source. I built IM in single-threaded mode and started to test. At first I wanted to compare their performance without interruptions, so I created a test that gives high priority to the process and its thread. I also set the affinity to one core. I found that IM was about 25% faster than Aurigma. But the more threads I added, the smaller IM's advantage over Aurigma became.
My project is a Windows service that starts about 7-10 child processes. Each process has 2 threads that process images. When I ran my test as two different processes with 2 threads each, I noticed that IM performed about 5% worse than Aurigma.
Maybe my question is not very detailed, but this area is a little new to me and I would be glad to get direction for further investigation. How can it be that one program is faster when run on one thread in one process, but slower when run in multiple processes at the same time?
For example,
Au: 8 processes x 2Th (20 tasks per thread) = 320 tasks in 245 secs
IM: 8 processes x 2Th (20 tasks per thread) = 320 tasks in 280 secs
Au: 4 processes x 2Th (20 tasks per thread) = 160 tasks in 121 secs
IM: 4 processes x 2Th (20 tasks per thread) = 160 tasks in 141 secs
We can see that Au works better when we have more than 1 process, but in single-process mode Au processes one task in 2.2 sec and IM in 1.4 sec, so the total time is better for IM.
private static void ThreadRunner(
    Action testFunc,
    int repeatCount,
    int threadCount
    )
{
    WaitHandle[] handles = new WaitHandle[threadCount];
    var stopwatch = new Stopwatch();

    // timed section: start all threads and wait for them to finish
    stopwatch.Start();
    for (int h = 0; h < threadCount; h++)
    {
        var handle = new ManualResetEvent(false);
        handles[h] = handle;
        var thread = new Thread(() =>
        {
            Runner(testFunc, repeatCount);
            handle.Set();
        });
        thread.Name = "Thread id" + h;
        thread.IsBackground = true;
        thread.Priority = ThreadPriority.Normal;
        thread.Start();
    }
    WaitHandle.WaitAll(handles);
    stopwatch.Stop();

    Console.WriteLine("All Threads Total time taken " + stopwatch.ElapsedMilliseconds);
}

private static void Runner(
    Action testFunc,
    int count
    )
{
    //Process.GetCurrentProcess().ProcessorAffinity = new IntPtr(2); // Use only the second core
    Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;
    Process.GetCurrentProcess().PriorityBoostEnabled = false;
    Thread.CurrentThread.Priority = ThreadPriority.Normal;

    var stopwatch = new Stopwatch();

    // warmup: run the test function for 10 seconds before measuring
    stopwatch.Start();
    while (stopwatch.ElapsedMilliseconds < 10000)
        testFunc();
    stopwatch.Stop();

    long elmsec = 0;
    for (int i = 0; i < count; i++)
    {
        stopwatch.Reset();
        stopwatch.Start();
        testFunc();
        stopwatch.Stop();

        elmsec += stopwatch.ElapsedMilliseconds;
        Console.WriteLine("Ticks: " + stopwatch.ElapsedTicks +
            " mS: " + stopwatch.ElapsedMilliseconds + " Thread name: " + Thread.CurrentThread.Name);
    }
    Console.WriteLine("Total time taken " + elmsec + " Thread name: " + Thread.CurrentThread.Name);
}
/// <summary>
/// Entry point
/// </summary>
/// <param name="args"></param>
private static void Main(string[] args)
{
    var files = GetFiles(args.FirstOrDefault());
    if (!files.Any())
    {
        Console.WriteLine("Source files were not found.");
        goto End;
    }

    //// Run tests
    Console.WriteLine("ImageMagick run... Resize");
    Runner(() => PerformanceTests.RunResizeImageMagickTest(files[0]), 20);
    Console.WriteLine("Aurigma run... Resize");
    Runner(() => PerformanceTests.RunResizeAurigmaTest(files[0]), 20);

    Console.WriteLine("ImageMagick run... multi Resize");
    ThreadRunner(() => PerformanceTests.RunResizeImageMagickTest(files[0]), 20, 2);
    Console.WriteLine("Aurigma run... multi Resize");
    ThreadRunner(() => PerformanceTests.RunResizeAurigmaTest(files[0]), 20, 2);

End:
    Console.WriteLine("Done");
    Console.ReadKey();
}
public static void RunResizeImageMagickTest(string source)
{
    float[] ratios = { 0.25f, 0.8f, 1.4f };

    // load the source bitmap
    using (MagickImage bitmap = new MagickImage(source))
    {
        foreach (float ratio in ratios)
        {
            // determine the target image size
            var size = new Size((int)Math.Round(bitmap.Width * ratio), (int)Math.Round(bitmap.Height * ratio));

            MagickImage thumbnail = null;
            try
            {
                thumbnail = new MagickImage(bitmap);

                // scale the image down
                thumbnail.Resize(size.Width, size.Height);
            }
            finally
            {
                if (thumbnail != null && thumbnail != bitmap)
                {
                    thumbnail.Dispose();
                }
            }
        }
    }
}
public static void RunResizeAurigmaTest(string source)
{
    float[] ratios = { 0.25f, 0.8f, 1.4f };

    //// load the source bitmap
    using (ABitmap bitmap = new ABitmap(source))
    {
        foreach (float ratio in ratios)
        {
            // determine the target image size
            var size = new Size((int)Math.Round(bitmap.Width * ratio), (int)Math.Round(bitmap.Height * ratio));

            ABitmap thumbnail = null;
            try
            {
                thumbnail = new ABitmap();

                // scale the image down
                using (var resize = new Resize(size, InterpolationMode.HighQuality))
                {
                    resize.ApplyTransform(bitmap, thumbnail);
                }
            }
            finally
            {
                if (thumbnail != null && thumbnail != bitmap)
                {
                    thumbnail.Dispose();
                }
            }
        }
    }
}
Code for testing has been added. I use C#/.NET; ImageMagick is used through the ImageMagick.NET library, and there is one for Aurigma too. The IM .NET library is written in C++/CLI, and IM itself is C, so a lot of languages are involved.
OpenMP for IM is off.
It could be a memory cache issue. It is possible that multiple threads utilizing memory in a certain way create a scenario where one thread invalidates cache memory that another thread was using, causing a stall.
Programs that are not purely number-crunching, but rely on a lot of I/O (CPU<->memory), are more difficult to analyze.
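A minimal sketch of that effect (a hypothetical micro-benchmark, not from the original thread): two threads incrementing adjacent counters share a cache line and keep invalidating it for each other, while counters padded far apart do not contend.

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class FalseSharingDemo
{
    // Adjacent elements share a cache line; elements 64 longs (512 bytes) apart do not.
    static readonly long[] counters = new long[128];

    static long Run(int stride)
    {
        var sw = Stopwatch.StartNew();
        Parallel.For(0, 2, t =>
        {
            int idx = t * stride;
            // Interlocked forces a real memory operation on every iteration.
            for (long i = 0; i < 50_000_000; i++)
                Interlocked.Increment(ref counters[idx]);
        });
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine("adjacent (stride 1):  " + Run(1) + " ms");
        Console.WriteLine("padded   (stride 64): " + Run(64) + " ms");
    }
}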

Tight Loop - Disk at 100%, Quad Core CPU @ 25% usage, only 15 MB/sec disk write speed

I have a tight loop which runs through a load of carts, each of which contains around 10 event objects, and writes them to the disk in JSON via an intermediate repository (jOliver common domain rewired with GetEventStore.com):
// create ~200,000 carts, each with ~5 events
List<Cart> testData = TestData.GenerateFrom(products);

foreach (var cart in testData)
{
    count = count + (cart as IAggregate).GetUncommittedEvents().Count;
    repository.Save(cart);
}
I see the disk saying it is at 100%, but the throughput is 'low' (15 MB/sec, ~5,000 events per second). Why is this? Things I can think of:
Since this is single-threaded, does the 25% CPU usage actually mean 100% of the 1 core I am on (is there any way to show the specific core my app is running on in Visual Studio)?
Am I constrained by I/O or by CPU? Can I expect better performance if I create my own thread pool, one thread per CPU?
How come I can copy a file at ~120 MB/sec, but I can only get a throughput of 15 MB/sec in my app? Is this due to the write size of lots of smaller packets?
Anything else I have missed?
The code I am using is from the geteventstore docs/blog:
public class GetEventStoreRepository : IRepository
{
    private const string EventClrTypeHeader = "EventClrTypeName";
    private const string AggregateClrTypeHeader = "AggregateClrTypeName";
    private const string CommitIdHeader = "CommitId";
    private const int WritePageSize = 500;
    private const int ReadPageSize = 500;

    private readonly IStreamNamingConvention streamNamingConvention;
    private readonly IEventStoreConnection connection;
    private static readonly JsonSerializerSettings serializerSettings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.None };

    public GetEventStoreRepository(IEventStoreConnection eventStoreConnection, IStreamNamingConvention namingConvention)
    {
        this.connection = eventStoreConnection;
        this.streamNamingConvention = namingConvention;
    }

    public void Save(IAggregate aggregate)
    {
        this.Save(aggregate, Guid.NewGuid(), d => { });
    }

    public void Save(IAggregate aggregate, Guid commitId, Action<IDictionary<string, object>> updateHeaders)
    {
        var commitHeaders = new Dictionary<string, object>
        {
            {CommitIdHeader, commitId},
            {AggregateClrTypeHeader, aggregate.GetType().AssemblyQualifiedName}
        };
        updateHeaders(commitHeaders);

        var streamName = this.streamNamingConvention.GetStreamName(aggregate.GetType(), aggregate.Identity);
        var newEvents = aggregate.GetUncommittedEvents().Cast<object>().ToList();
        var originalVersion = aggregate.Version - newEvents.Count;
        var expectedVersion = originalVersion == 0 ? ExpectedVersion.NoStream : originalVersion - 1;
        var eventsToSave = newEvents.Select(e => ToEventData(Guid.NewGuid(), e, commitHeaders)).ToList();

        if (eventsToSave.Count < WritePageSize)
        {
            this.connection.AppendToStreamAsync(streamName, expectedVersion, eventsToSave).Wait();
        }
        else
        {
            var startTransactionTask = this.connection.StartTransactionAsync(streamName, expectedVersion);
            startTransactionTask.Wait();
            var transaction = startTransactionTask.Result;

            var position = 0;
            while (position < eventsToSave.Count)
            {
                var pageEvents = eventsToSave.Skip(position).Take(WritePageSize);
                var writeTask = transaction.WriteAsync(pageEvents);
                writeTask.Wait();
                position += WritePageSize;
            }

            var commitTask = transaction.CommitAsync();
            commitTask.Wait();
        }

        aggregate.ClearUncommittedEvents();
    }

    private static EventData ToEventData(Guid eventId, object evnt, IDictionary<string, object> headers)
    {
        var data = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(evnt, serializerSettings));
        var eventHeaders = new Dictionary<string, object>(headers)
        {
            {
                EventClrTypeHeader, evnt.GetType().AssemblyQualifiedName
            }
        };
        var metadata = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(eventHeaders, serializerSettings));
        var typeName = evnt.GetType().Name;

        return new EventData(eventId, typeName, true, data, metadata);
    }
}
It was partially mentioned in the comments, but to expand on that: the code shown is effectively fully single-threaded (though you use async, you just wait on the tasks, so you are effectively working synchronously), so you are suffering from latency and the overhead of context switching and the EventStore protocol round trips. Either really go the async route, avoiding waiting on the async tasks and parallelizing instead (EventStore likes parallelization because it can batch multiple writes), or do batching yourself and send, for example, 20 events at a time.
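A minimal sketch of the parallel route (my addition, assuming a hypothetical SaveAsync variant that returns the Task from AppendToStreamAsync instead of calling .Wait() on it), saving 20 carts at a time:

// Hypothetical async variant of the repository's Save, batched 20 at a time.
const int batchSize = 20;
for (int i = 0; i < testData.Count; i += batchSize)
{
    var batch = testData.Skip(i).Take(batchSize);
    await Task.WhenAll(batch.Select(cart => repository.SaveAsync(cart)));
}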

Stack size difference for Thread and Process

I have recently observed in Java (while implementing a deep recursive function call) that the stack size for a thread can be larger than that of the main process.
By this I mean, e.g., the thread could execute approx. 30,000 recursive calls,
while the program without the thread could only make 10,000 recursive calls to the same function.
Can anyone suggest why this is so?
For better understanding and context, please run the Java code as-is and see the messages printed to the console.
package com.java.concept;

/**
 * This provides a mechanism to increase the call stack size: by giving the
 * thread an explicit stack size in the constructor, we can increase it.
 * Results were about 3 times higher.
 */
public class DeepRecursionCallStack {
    private static int level = 0;

    public static long fact(int n) {
        level++;
        return n < 2 ? n : n * fact(n - 1);
    }

    public static void main(String[] args) throws InterruptedException {
        // The fourth constructor argument is the requested stack size (1,000,000 bytes).
        Thread t = new Thread(null, null, "DeepRecursionCallStack", 1000000) {
            @Override
            public void run() {
                try {
                    level = 0;
                    System.out.println(fact(1 << 15));
                } catch (StackOverflowError e) {
                    System.err.println("New thread : true recursion level was " + level);
                    System.err.println("New thread : reported recursion level was "
                            + e.getStackTrace().length);
                }
            }
        };
        t.start();
        t.join();

        try {
            level = 0;
            System.out.println(fact(1 << 15));
        } catch (StackOverflowError e) {
            System.err.println("Main code : true recursion level was " + level);
            System.err.println("Main code : reported recursion level was "
                    + e.getStackTrace().length);
        }
    }
}
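The difference comes from the explicit stack size (1,000,000 bytes, the fourth constructor argument) requested for the new thread, while the main thread keeps the JVM default (controlled via -Xss). For comparison, .NET exposes the same knob on its Thread constructor; a minimal C# sketch (my example, not from the question):

using System;
using System.Threading;

class StackSizeDemo
{
    // Each call consumes a stack frame; 500,000 frames would overflow the
    // default 1 MB stack but fit comfortably in a 64 MB one.
    static int Depth(int n) => n == 0 ? 0 : 1 + Depth(n - 1);

    static void Main()
    {
        // The second constructor argument is the maximum stack size in bytes.
        var t = new Thread(() => Console.WriteLine(Depth(500_000)), 64 * 1024 * 1024);
        t.Start();
        t.Join();
    }
}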

Does ConcurrencyMode of Multiple have relevance when InstanceContextMode is PerCall for a WCF service with Net.Tcp binding?

I always thought that setting InstanceContextMode to PerCall made ConcurrencyMode irrelevant, even when using a session-aware binding like net.tcp. This is what MSDN says
(http://msdn.microsoft.com/en-us/library/ms731193.aspx):
"In PerCall instancing, concurrency is not relevant, because each message is processed by a new InstanceContext and, therefore, never more than one thread is active in the InstanceContext."
But today I was going through Juval Lowy's book Programming WCF Services, and he writes in Chapter 8:

If the per-call service has a transport-level session, whether concurrent processing of calls is allowed is a product of the service concurrency mode. If the service is configured with ConcurrencyMode.Single, concurrent processing of the pending calls is not allowed, and the calls are dispatched one at a time. [...] I consider this to be a flawed design. If the service is configured with ConcurrencyMode.Multiple, concurrent processing is allowed. Calls are dispatched as they arrive, each to a new instance, and execute concurrently. An interesting observation here is that in the interest of throughput, it is a good idea to configure a per-call service with ConcurrencyMode.Multiple: the instance itself will still be thread-safe (so you will not incur the synchronization liability), yet you will allow concurrent calls from the same client.
This contradicts my understanding and what MSDN says. Which is correct?
In my case I have a WCF net.tcp service used by many client applications, each of which creates a new proxy object, makes the call, and then immediately closes the proxy. The service has PerCall InstanceContextMode. Will I get improved throughput if I change the ConcurrencyMode to Multiple, with no worse thread-safety behaviour than PerCall?
The key phrase in reading Lowy's statement is "in the interest of throughput". Lowy is pointing out that with ConcurrencyMode.Single, WCF will blindly take a lock to serialize calls to the service instance. Locks are expensive, and this one isn't necessary, because PerCall already guarantees that a second thread will never try to call the same service instance.
In terms of behavior:
ConcurrencyMode does not matter for a PerCall service instance.
In terms of performance:
A PerCall service with ConcurrencyMode.Multiple should be slightly faster because it's not creating and acquiring the (unneeded) thread lock that ConcurrencyMode.Single uses.
I wrote a quick benchmark program to see if I could measure the performance impact of Single vs Multiple for a PerCall service: the benchmark showed no meaningful difference.
I pasted in the code below if you want to try running it yourself.
Test cases I tried:
600 threads calling a service 500 times
200 threads calling a service 1000 times
8 threads calling a service 10000 times
1 thread calling a service 10000 times
I ran this on a 4-CPU VM running Server 2008 R2. All but the 1-thread case were CPU-constrained.
Results:
All the runs were within about 5% of each other. Sometimes ConcurrencyMode.Multiple was faster, sometimes ConcurrencyMode.Single was faster. Maybe a proper statistical analysis could pick a winner, but in my opinion they are close enough not to matter.
Here’s a typical output:
Starting Single Service on net.pipe://localhost/base...
Type=SingleService ThreadCount=600 ThreadCallCount=500
runtime: 45156759 ticks 12615 msec
Starting Multiple Service on net.pipe://localhost/base...
Type=MultipleService ThreadCount=600 ThreadCallCount=500
runtime: 48731273 ticks 13613 msec
Starting Single Service on net.pipe://localhost/base...
Type=SingleService ThreadCount=600 ThreadCallCount=500
runtime: 48701509 ticks 13605 msec
Starting Multiple Service on net.pipe://localhost/base...
Type=MultipleService ThreadCount=600 ThreadCallCount=500
runtime: 48590336 ticks 13574 msec
Benchmark Code:
Usual caveat: This is benchmark code that takes short cuts that aren’t appropriate for production use.
using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace WCFTest
{
    [ServiceContract]
    public interface ISimple
    {
        [OperationContract()]
        void Put();
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, ConcurrencyMode = ConcurrencyMode.Single)]
    public class SingleService : ISimple
    {
        public void Put()
        {
            //Console.WriteLine("put got " + i);
            return;
        }
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class MultipleService : ISimple
    {
        public void Put()
        {
            //Console.WriteLine("put got " + i);
            return;
        }
    }

    public class ThreadParms
    {
        public int ManagedThreadId { get; set; }
        public ServiceEndpoint ServiceEndpoint { get; set; }
    }

    public class BenchmarkService
    {
        public readonly int ThreadCount;
        public readonly int ThreadCallCount;
        public readonly Type ServiceType;

        int _completed = 0;
        System.Diagnostics.Stopwatch _stopWatch;
        EventWaitHandle _waitHandle;
        bool _done;

        public BenchmarkService(Type serviceType, int threadCount, int threadCallCount)
        {
            this.ServiceType = serviceType;
            this.ThreadCount = threadCount;
            this.ThreadCallCount = threadCallCount;
            _done = false;
        }

        public void Run(string baseAddress)
        {
            if (_done)
                throw new InvalidOperationException("Can't run twice");

            ServiceHost host = new ServiceHost(ServiceType, new Uri(baseAddress));
            host.Open();

            Console.WriteLine("Starting " + ServiceType.Name + " on " + baseAddress + "...");

            _waitHandle = new EventWaitHandle(false, EventResetMode.ManualReset);
            _completed = 0;
            _stopWatch = System.Diagnostics.Stopwatch.StartNew();

            ServiceEndpoint endpoint = host.Description.Endpoints.Find(typeof(ISimple));

            for (int i = 1; i <= ThreadCount; i++)
            {
                // ServiceEndpoint is NOT thread safe. Make a copy for each thread.
                ServiceEndpoint temp = new ServiceEndpoint(endpoint.Contract, endpoint.Binding, endpoint.Address);
                ThreadPool.QueueUserWorkItem(new WaitCallback(CallServiceManyTimes),
                    new ThreadParms() { ManagedThreadId = i, ServiceEndpoint = temp });
            }

            _waitHandle.WaitOne();
            host.Shutdown();
            _done = true;

            //Console.WriteLine("All DONE.");
            Console.WriteLine(" Type=" + ServiceType.Name + " ThreadCount=" + ThreadCount + " ThreadCallCount=" + ThreadCallCount);
            Console.WriteLine(" runtime: " + _stopWatch.ElapsedTicks + " ticks " + _stopWatch.ElapsedMilliseconds + " msec");
        }

        public void CallServiceManyTimes(object threadParams)
        {
            ThreadParms p = (ThreadParms)threadParams;
            ChannelFactory<ISimple> factory = new ChannelFactory<ISimple>(p.ServiceEndpoint);
            ISimple proxy = factory.CreateChannel();
            for (int i = 1; i < ThreadCallCount; i++)
            {
                proxy.Put();
            }
            ((ICommunicationObject)proxy).Shutdown();
            factory.Shutdown();

            int currentCompleted = Interlocked.Increment(ref _completed);
            if (currentCompleted == ThreadCount)
            {
                _stopWatch.Stop();
                _waitHandle.Set();
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            BenchmarkService benchmark;
            int threadCount = 600;
            int threadCalls = 500;
            string baseAddress = "net.pipe://localhost/base";

            for (int i = 0; i <= 4; i++)
            {
                benchmark = new BenchmarkService(typeof(SingleService), threadCount, threadCalls);
                benchmark.Run(baseAddress);

                benchmark = new BenchmarkService(typeof(MultipleService), threadCount, threadCalls);
                benchmark.Run(baseAddress);
            }

            baseAddress = "http://localhost/base";

            for (int i = 0; i <= 4; i++)
            {
                benchmark = new BenchmarkService(typeof(SingleService), threadCount, threadCalls);
                benchmark.Run(baseAddress);

                benchmark = new BenchmarkService(typeof(MultipleService), threadCount, threadCalls);
                benchmark.Run(baseAddress);
            }

            Console.WriteLine("Press ENTER to close.");
            Console.ReadLine();
        }
    }

    public static class Extensions
    {
        static public void Shutdown(this ICommunicationObject obj)
        {
            try
            {
                if (obj != null)
                    obj.Close();
            }
            catch (Exception ex)
            {
                Console.WriteLine("Shutdown exception: {0}", ex.Message);
                obj.Abort();
            }
        }
    }
}
