Why is AutoMapper using so much memory?

I am using the latest version of AutoMapper (v3.0.0.0-ci1036) and when it converts an object with binary data, it uses crazy amounts of memory (200 MB for a 10 MB file). Here is an example of such a "file" being converted:
class Program
{
    static void Main(string[] args)
    {
        convertObject();
    }

    private static void convertObject()
    {
        var rnd = new Random();
        var fileContents = new Byte[1024 * 1024 * 10];
        rnd.NextBytes(fileContents);
        var attachment = new Attachment { Content = fileContents };
        Mapper.CreateMap<Attachment, AttachmentDTO>();
        Console.WriteLine("Press enter to convert");
        Console.ReadLine();
        var dto = Mapper.Map<Attachment, AttachmentDTO>(attachment);
        Console.WriteLine(dto.Content.Length + " bytes");
        Console.ReadLine();
    }
}

public class Attachment
{
    public byte[] Content { get; set; }
}

public class AttachmentDTO
{
    public byte[] Content { get; set; }
}
Is there something wrong with my code, or do I have to stop using AutoMapper for objects that contain binary data?

I am not sure, but the reason might be the following:
Your C# application runs on the .NET runtime, which reclaims heap memory when possible using the garbage collector.
This has the side effect of fragmenting the heap. So, for example, you might have 100 MB allocated, with 40% of it free for new objects but fragmented into smaller chunks of at most 5 MB each.
In this situation, when you allocate a new 10 MB array, the runtime has no contiguous room for it, even though it has 40 MB free.
To solve the problem it grows the heap to 110 MB (in the best case) and allocates the new 10 MB byte array there. Note also that arrays this large live on the Large Object Heap, which is not compacted, so such fragmentation tends to persist.
Also see:
http://msdn.microsoft.com/en-us/magazine/dd882521.aspx#id0400035
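If you want to check whether the extra memory is live managed data or just heap the runtime has grown (and possibly fragmented), a minimal sketch along these lines can help; it reuses the Attachment/AttachmentDTO setup from the question and measures the managed heap around the mapping call:

long before = GC.GetTotalMemory(true); // force a collection, then read the managed heap size
var dto = Mapper.Map<Attachment, AttachmentDTO>(attachment);
long after = GC.GetTotalMemory(true);
Console.WriteLine("Managed delta: " + ((after - before) / (1024 * 1024)) + " MB");
GC.KeepAlive(dto); // keep the mapped result reachable during the measurement

If the delta stays near the 10 MB payload while Task Manager reports far more, the difference is heap the runtime has reserved but not returned to the OS, which matches the fragmentation explanation above.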

Related

Memory leak in Camel netty TCP client when consuming lines with Windows line breaks (CR LF)

My Camel netty TCP client consuming text lines seems to have a memory leak, but only if the test data lines end with Windows (CR LF) line breaks. I encountered no issues with Unix (LF) line breaks.
I made a short test to demonstrate the issue, simulating a TCP server that continuously sends test data lines.
With Unix (LF) line breaks in the test data I see a throughput of about 3,500 messages/second and a steady 180 MB RAM use. No issues.
With Windows (CR LF) line breaks in the test data I see a throughput starting at 380,000 (whoa!) messages/second until hitting my -Xmx4G heap limit after about 30 seconds, then slowing down considerably, probably because of excessive GC; given more heap it grows steadily until hitting that limit too (tried with -Xmx20G).
The only difference is really the line breaks in my test data...
Am I missing something here?
Using Camel 2.24.0 (which uses Netty 4.1.32.Final) on Linux with OpenJDK 1.8.0_192. The problem also occurs with the latest Netty 4.1.36.Final, and with the OpenJ9 JVM, so it does not seem to be JVM specific.
public abstract class MyRouteBuilderTestBase extends CamelTestSupport {
    private final int nettyPort = AvailablePortFinder.getNextAvailable();
    private ServerSocket serverSocket;
    private Socket clientSocket;
    private PrintWriter out;

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                from("netty4:tcp://localhost:" + nettyPort + "?clientMode=true&textline=true&sync=false")
                    .to("log:throughput?level=INFO&groupInterval=10000&groupActiveOnly=false");
            }
        };
    }

    protected void startServerStub(String testdata) throws Exception {
        serverSocket = new ServerSocket(nettyPort);
        clientSocket = serverSocket.accept();
        out = new PrintWriter(clientSocket.getOutputStream(), true);
        for (;;) {
            out.print(testdata);
        }
    }

    @After
    public void after() throws Exception {
        if (out != null) out.close();
        if (clientSocket != null) clientSocket.close();
        if (serverSocket != null) serverSocket.close();
    }
}

public class MyRouteBuilderTestUnixLineBreaks extends MyRouteBuilderTestBase {
    @Test
    public void testUnixLineBreaks() throws Exception {
        startServerStub("my test data\n"); // Unix LF
    }
}

public class MyRouteBuilderTestWindowsLineBreaks extends MyRouteBuilderTestBase {
    @Test
    public void testWindowsLineBreaks() throws Exception {
        startServerStub("my test data\r\n"); // Windows CR LF
    }
}
Heap dump analysis showed that the memory was allocated by a single instance of io.netty.util.concurrent.DefaultEventExecutor, which internally uses a LinkedBlockingQueue of unlimited size. This queue grows indefinitely under load, causing the issue.
The DefaultEventExecutor is created by Camel because of the parameter usingExecutorService, which is true by default (maybe not a good choice). Setting usingExecutorService=false makes Netty use its event loop instead of the executor, which works much better.
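In route terms that is a single extra endpoint option (the complete test listing with all changes follows below):

from("netty4:tcp://localhost:" + nettyPort + "?clientMode=true&textline=true&sync=false&usingExecutorService=false")
    .to("log:throughput?level=INFO&groupInterval=10000&groupActiveOnly=false");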
I now get 600,000 messages per second throughput with data using Windows line breaks (CR LF), with a steady RAM use of about 200 MB (-Xmx500M). Nice.
Though with data using Unix line breaks (LF) the throughput was only about 6,500 messages per second, two orders of magnitude slower, which was still puzzling.
The reason is that Camel creates its own org.apache.camel.component.netty4.codec.DelimiterBasedFrameDecoder class by subclassing Netty's io.netty.handler.codec.DelimiterBasedFrameDecoder -- I don't know why, since Camel's class does not add any functionality. By subclassing, however, Camel disables an optimization inside Netty's DelimiterBasedFrameDecoder that switches to io.netty.handler.codec.LineBasedFrameDecoder internally, but only if it is not subclassed.
To overcome this I needed to explicitly declare decoders and encoders using Netty's classes, in addition to setting usingExecutorService=false.
Now I get the 600,000 messages per second throughput with data using Unix line breaks (LF) too, and see a steady RAM use of about 200 MB. That looks much better.
public abstract class MyRouteBuilderTestBase extends CamelTestSupport {
    private final int nettyPort = AvailablePortFinder.getNextAvailable();
    private ServerSocket serverSocket;
    private Socket clientSocket;
    private PrintWriter out;

    @Override
    protected JndiRegistry createRegistry() throws Exception {
        JndiRegistry registry = super.createRegistry();

        List<ChannelHandler> decoders = new ArrayList<>();
        DefaultChannelHandlerFactory decoderTextLine = new DefaultChannelHandlerFactory() {
            @Override
            public ChannelHandler newChannelHandler() {
                return new io.netty.handler.codec.DelimiterBasedFrameDecoder(1024, true, Delimiters.lineDelimiter());
                // Works too:
                // return new LineBasedFrameDecoder(1024, true, true);
            }
        };
        decoders.add(decoderTextLine);
        ShareableChannelHandlerFactory decoderStr = new ShareableChannelHandlerFactory(new StringDecoder(CharsetUtil.US_ASCII));
        decoders.add(decoderStr);
        registry.bind("decoders", decoders);

        List<ChannelHandler> encoders = new ArrayList<>();
        ShareableChannelHandlerFactory encoderStr = new ShareableChannelHandlerFactory(new StringEncoder(CharsetUtil.US_ASCII));
        encoders.add(encoderStr);
        registry.bind("encoders", encoders);

        return registry;
    }

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                from("netty4:tcp://localhost:" + nettyPort + "?clientMode=true&textline=true&sync=false&usingExecutorService=false&encoders=#encoders&decoders=#decoders")
                    .to("log:throughput?level=INFO&groupInterval=10000&groupActiveOnly=false");
            }
        };
    }

    protected void startServerStub(String testdata) throws Exception {
        serverSocket = new ServerSocket(nettyPort);
        clientSocket = serverSocket.accept();
        out = new PrintWriter(clientSocket.getOutputStream(), true);
        for (;;) {
            out.print(testdata);
        }
    }

    @After
    public void after() throws Exception {
        if (out != null) out.close();
        if (clientSocket != null) clientSocket.close();
        if (serverSocket != null) serverSocket.close();
    }
}
Update: The memory usage issue is not a memory leak (and I regret phrasing my question that way) but a buffering issue. Please consult the comments on this answer by users Bedla and Claus Ibsen for a good understanding of the consequences of the solution outlined above. Please also consult CAMEL-13527.

EntryProcessor without locking entries

In my application I'm trying to process data in an IMap. The scenario is as follows:
the application receives a request (REST, for example) with a set of keys to be processed
the application processes the entries with the given keys and returns the result -- a map where the key is the original key of the entry and the value is the calculated result
For this scenario IMap.executeOnKeys is almost perfect, with one problem: the entry is locked while being processed, and that really hurts throughput. The IMap is populated on startup and never modified.
Is it possible to process entries without locking them? If possible, without sending entries to another node and without causing network overhead (such as sending 1000 tasks to a single node in a for-loop).
Here is a reference implementation demonstrating what I'm trying to achieve:
public class Main {
    public static void main(String[] args) throws Exception {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = instance.getMap("the-map");
        // populated once on startup, never modified
        for (int i = 1; i <= 10; i++) {
            map.put("key-" + i, "value-" + i);
        }
        Set<String> keys = new HashSet<>();
        keys.add("key-1"); // every request may have a different key set; they may overlap
        System.out.println(" ---- processing ----");
        ForkJoinPool pool = new ForkJoinPool();
        // to simulate parallel requests on the same entry
        pool.execute(() -> map.executeOnKeys(keys, new MyEntryProcessor("first")));
        pool.execute(() -> map.executeOnKeys(keys, new MyEntryProcessor("second")));
        System.out.println(" ---- pool is waiting ----");
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.println(" ------ DONE -------");
    }

    static class MyEntryProcessor implements EntryProcessor<String, String> {
        private String name;

        MyEntryProcessor(String name) {
            this.name = name;
        }

        @Override
        public Object process(Map.Entry<String, String> entry) {
            System.out.println(name + " is processing " + entry);
            return calculate(entry); // may take some time, doesn't modify the entry
        }

        @Override
        public EntryBackupProcessor<String, String> getBackupProcessor() {
            return null;
        }
    }
}
Thanks in advance
In executeOnKeys the entries are not locked. Maybe you mean that the processing happens on partition threads, so that there may be no other processing for the particular key? Anyhow, here's the solution:
Your EntryProcessor should implement (see the sketch after this list):
Offloadable interface -> this means that the partition thread will be used only for reading the value. The calculation will be done in the offloading thread pool.
ReadOnly interface -> in this case the EP won't hop on the partition thread again to save the modifications you might have made to the entry. Since your EP does not modify entries, this will increase performance.
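Put together, a minimal sketch of the question's processor with both interfaces applied could look like this (assuming Hazelcast 3.8+, where Offloadable and ReadOnly live in com.hazelcast.core, and reusing the calculate method from the question):

static class MyEntryProcessor implements EntryProcessor<String, String>, Offloadable, ReadOnly {
    private final String name;

    MyEntryProcessor(String name) {
        this.name = name;
    }

    @Override
    public Object process(Map.Entry<String, String> entry) {
        return calculate(entry); // runs in the offloading pool, not on the partition thread
    }

    @Override
    public String getExecutorName() {
        // tells Hazelcast to execute process() on its built-in offloading executor
        return Offloadable.OFFLOADABLE_EXECUTOR;
    }

    @Override
    public EntryBackupProcessor<String, String> getBackupProcessor() {
        return null; // ReadOnly: the entry is never modified, so there is nothing to replicate
    }
}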

GroovyShell in Java8 : memory leak / duplicated classes [src code + load test provided]

We have a memory leak caused by GroovyShell / Groovy scripts (see the GroovyEvaluator code at the end). The main problems are (copy-paste from the MAT analyser):
The class "java.beans.ThreadGroupContext", loaded by "<system class loader>", occupies 807,406,960 (33.38%) bytes.
and:
16 instances of "org.codehaus.groovy.reflection.ClassInfo$ClassInfoSet$Segment", loaded by "sun.misc.Launcher$AppClassLoader @ 0x7004e9c80", occupy 1,510,256,544 (62.44%) bytes
We're using Groovy 2.3.11 and Java 8 (1.8.0_25 to be exact).
Upgrading to Groovy 2.4.6 doesn't solve the problem; it just improves memory usage a little, especially non-heap.
Java args we're using: -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC
BTW, I've read https://dzone.com/articles/groovyshell-and-memory-leaks. We do set the GroovyShell to null when it's no longer needed. Using GroovyShell().parse() would probably help, but it isn't really an option for us -- we have >10 sets, each consisting of 20-100 scripts, and they can be changed at any time (at runtime).
Setting MaxMetaspaceSize (e.g. -XX:MaxMetaspaceSize=512m) should also help, but it doesn't remove the root cause, so I'm still trying to nail it down.
I created a load test to recreate the problem (see the code at the end). When I run it:
heap size, metaspace size and the number of classes keep increasing
a heap dump taken after several minutes is bigger than 4 GB
Performance charts for the first 3 minutes:
As I've already mentioned, I'm using MAT to analyse heap dumps. So let's check the Dominator tree report:
A HashMap takes > 30% of the heap.
So let's analyse it further and see what sits inside it. Let's check the hash entries:
It reports 38,830 entries, including 38,780 entries with keys matching ".class Script.".
Another thing, the "duplicate classes" report:
We have 400 entries (because the load test defines 400 Groovy scripts), all for "ScriptN" classes.
All of them hold references to GroovyClassLoader$InnerLoader.
I've found a similar bug report: https://issues.apache.org/jira/browse/GROOVY-7498 (see the comments at the end and the attached screenshot) -- their problems were solved by upgrading Java to 1.8u51. That didn't do the trick for us, though.
Our code:
public class GroovyEvaluator
{
    private GroovyShell shell;

    public GroovyEvaluator()
    {
        this(Collections.<String, Object>emptyMap());
    }

    public GroovyEvaluator(final Map<String, Object> contextVariables)
    {
        shell = new GroovyShell();
        for (Map.Entry<String, Object> contextVariable : contextVariables.entrySet())
        {
            shell.setVariable(contextVariable.getKey(), contextVariable.getValue());
        }
    }

    public void setVariables(final Map<String, Object> answers)
    {
        for (Map.Entry<String, Object> questionAndAnswer : answers.entrySet())
        {
            String questionId = questionAndAnswer.getKey();
            Object answer = questionAndAnswer.getValue();
            shell.setVariable(questionId, answer);
        }
    }

    public Object evaluateExpression(String expression)
    {
        return shell.evaluate(expression);
    }

    public void setVariable(final String name, final Object value)
    {
        shell.setVariable(name, value);
    }

    public void close()
    {
        shell = null;
    }
}
Load test:
/** Run using -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC */
public class GroovyEvaluatorLoadTest
{
private static int NUMBER_OF_QUESTIONS = 400;
private final Map<String, Object> contextVariables = Collections.emptyMap();
private List<Fact> factMappings = new ArrayList<>();
public GroovyEvaluatorLoadTest()
{
for (int i=0; i<NUMBER_OF_QUESTIONS; i++)
{
factMappings.add(new Fact("fact" + i, "question" + i));
}
}
private void callEvaluateExpression(int iter)
{
GroovyEvaluator groovyEvaluator = new GroovyEvaluator(contextVariables);
Map<String, Object> factValues = new HashMap<>();
Map<String, Object> answers = new HashMap<>();
for (int i=0; i<NUMBER_OF_QUESTIONS; i++)
{
factValues.put("fact" + i, iter + "-fact-value-" + i);
answers.put("question" + i, iter + "-answer-" + i);
}
groovyEvaluator.setVariables(answers);
groovyEvaluator.setVariable("answers", answers);
groovyEvaluator.setVariable("facts", factValues);
for (Fact fact : factMappings)
{
groovyEvaluator.evaluateExpression(fact.mapping);
}
groovyEvaluator.close();
}
public static void main(String [] args)
{
GroovyEvaluatorLoadTest test = new GroovyEvaluatorLoadTest();
for (int i=0; i<995000; i++)
{
test.callEvaluateExpression(i);
}
test.callEvaluateExpression(0);
}
}
public class Fact
{
public final String factId;
public final String mapping;
public Fact(final String factId, final String mapping)
{
this.factId = factId;
this.mapping = mapping;
}
}
Any thoughts?
Thx in advance
OK, this is my solution:
public class GroovyEvaluator
{
    private static GroovyScriptCachingBuilder groovyScriptCachingBuilder = new GroovyScriptCachingBuilder();
    private Map<String, Object> variables = new HashMap<>();

    public GroovyEvaluator()
    {
        this(Collections.<String, Object>emptyMap());
    }

    public GroovyEvaluator(final Map<String, Object> contextVariables)
    {
        variables.putAll(contextVariables);
    }

    public void setVariables(final Map<String, Object> answers)
    {
        variables.putAll(answers);
    }

    public void setVariable(final String name, final Object value)
    {
        variables.put(name, value);
    }

    public Object evaluateExpression(String expression)
    {
        final Binding binding = new Binding();
        for (Map.Entry<String, Object> varEntry : variables.entrySet())
        {
            binding.setProperty(varEntry.getKey(), varEntry.getValue());
        }
        Script script = groovyScriptCachingBuilder.getScript(expression);
        synchronized (script)
        {
            script.setBinding(binding);
            return script.run();
        }
    }
}

public class GroovyScriptCachingBuilder
{
    private GroovyShell shell = new GroovyShell();
    private Map<String, Script> scripts = new HashMap<>();

    public Script getScript(final String expression)
    {
        Script script;
        if (scripts.containsKey(expression))
        {
            script = scripts.get(expression);
        }
        else
        {
            script = shell.parse(expression);
            scripts.put(expression, script);
        }
        return script;
    }
}
The new solution keeps the number of loaded classes and the Metaspace size at a constant level. Non-heap memory usage is ~70 MB.
Also: there is no need to use UseConcMarkSweepGC anymore. You can choose whichever GC you want, or stick with the default one :)
Synchronising access to the script objects might not be the best option, but it's the only one I found that keeps the Metaspace size at a reasonable level -- and even better, it keeps it constant. It might not be the best solution for everyone, but it works great for us: we have big sets of tiny scripts, which means this solution is (pretty much) scalable.
Let's see some STATS for GroovyEvaluatorLoadTest with GroovyEvaluator using:
the old approach with shell.evaluate(expression):
0 iterations took 5.03 s
100 iterations took 285.185 s
200 iterations took 821.307 s
the new approach with script.setBinding(binding):
0 iterations took 4.524 s
100 iterations took 19.291 s
200 iterations took 33.44 s
300 iterations took 47.791 s
400 iterations took 62.086 s
500 iterations took 77.329 s
So an additional advantage is: it's lightning fast compared to the previous, leaking solution ;)

Getting the most info from .NET Cache

I'm playing around with retrieving as much information as I can regarding the use of .NET caching.
With the Cache object we can retrieve 3 parameters:
Count
EffectivePercentagePhysicalMemoryLimit
EffectivePrivateBytesLimit
But what about all the rest?
Where can I get information such as "available memory on the server", "used cache memory", and so on?
There was an old project on ASP Alliance called Cache Manager, but it's no longer available; all I could find was an image of it, which displays exactly this:
I was looking at the docs and reading about the new .NET 4 entries in System.Runtime.Caching, like CacheMemoryLimit and PhysicalMemoryLimit, but I can't find real examples of how to use them...
Does anyone have a wrapper for cache info around, or any idea how to use these new members?
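For the System.Runtime.Caching part, a minimal sketch of reading those limits (assuming .NET 4 and the default MemoryCache instance) would be:

var cache = System.Runtime.Caching.MemoryCache.Default;
Console.WriteLine("Name: " + cache.Name);
Console.WriteLine("Count: " + cache.GetCount());                           // number of cached entries
Console.WriteLine("CacheMemoryLimit: " + cache.CacheMemoryLimit + " bytes");
Console.WriteLine("PhysicalMemoryLimit: " + cache.PhysicalMemoryLimit + " %"); // percentage of physical memory
Console.WriteLine("PollingInterval: " + cache.PollingInterval);            // how often limits are checked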
My current cache implementation is:
public class InMemoryCache : ICacheService
{
    private int minutes = 15;

    public T Get<T>(string cacheID, Func<T> getItemCallback) where T : class
    {
        T item = HttpRuntime.Cache.Get(cacheID) as T;
        if (item == null)
        {
            item = getItemCallback();
            HttpRuntime.Cache.Insert(
                cacheID,
                item,
                null,
                DateTime.Now.AddMinutes(minutes),
                System.Web.Caching.Cache.NoSlidingExpiration);
        }
        return item;
    }

    public void Clear()
    {
        IDictionaryEnumerator enumerator = HttpRuntime.Cache.GetEnumerator();
        while (enumerator.MoveNext())
            HttpRuntime.Cache.Remove(enumerator.Key.ToString());
    }

    public Dictionary<string, string> Stats()
    {
        var cache = HttpRuntime.Cache;
        var r = new Dictionary<string, string>();
        r.Add("Count", cache.Count.ToString());
        r.Add("EffectivePercentagePhysicalMemoryLimit", cache.EffectivePercentagePhysicalMemoryLimit.ToString());
        r.Add("EffectivePrivateBytesLimit", cache.EffectivePrivateBytesLimit.ToString());
        return r;
    }
}
Take a look at this:
https://www.youtube.com/watch?v=Dz_7hukyejQ
This is based on a 100% managed custom memory manager that stores cached items in byte[] segments around 256 MB in size. This allows storing hundreds of millions of objects without slowing anything down, as the GC does not see the "objects": they reside inside the byte[] arrays.
The video shows how you can watch the cache in action and see how many objects, pages, priorities, etc. it holds.
Here is the code:
https://github.com/aumcode/nfx/tree/master/Source/NFX/ApplicationModel/Pile
Namely the main interface of cache:
https://github.com/aumcode/nfx/blob/master/Source/NFX/ApplicationModel/Pile/ICache.cs
You can have named tables, age-based or absolute-timestamp expiration, memory limits and entry priorities.
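For intuition only, here is a toy sketch of the core idea -- nothing like the real NFX Pile API: keep length-prefixed payloads inside one big byte[] and hand out integer offsets, so the GC tracks a single array instead of millions of small objects.

using System;

// Toy illustration (NOT the real NFX Pile API): one large byte[] segment holds
// length-prefixed payloads; callers keep integer offsets instead of references,
// so the GC only ever sees one array.
public class TinyPile
{
    private readonly byte[] _segment = new byte[64 * 1024 * 1024]; // one 64 MB segment
    private int _free; // next free offset; no bounds checks or freeing in this toy

    public int Put(byte[] payload)
    {
        int at = _free;
        BitConverter.GetBytes(payload.Length).CopyTo(_segment, at); // 4-byte length prefix
        payload.CopyTo(_segment, at + 4);
        _free = at + 4 + payload.Length;
        return at; // the "pointer" the caller stores instead of an object reference
    }

    public byte[] Get(int at)
    {
        int len = BitConverter.ToInt32(_segment, at);
        var copy = new byte[len];
        Array.Copy(_segment, at + 4, copy, 0, len);
        return copy;
    }
}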

Streamwriter, StringBuilder and Parallel loops

Sorry for the big chunk of code, I couldn't explain it with less. Basically, I'm trying to write into a file from many tasks.
Can you guys please tell me what I'm doing wrong? _streamWriter.WriteLine() throws an ArgumentOutOfRangeException.
class Program
{
    private static LogBuilder _log = new LogBuilder();

    static void Main(string[] args)
    {
        var acts = new List<Func<string>>();
        var rnd = new Random();
        for (int i = 0; i < 10000; i++)
        {
            acts.Add(() =>
            {
                var delay = rnd.Next(300);
                Thread.Sleep(delay);
                return "act that lasted " + delay;
            });
        }
        Parallel.ForEach(acts, act =>
        {
            _log.Log.AppendLine(act.Invoke());
            _log.Write();
        });
    }
}

public class LogBuilder : IDisposable
{
    public StringBuilder Log = new StringBuilder();
    private FileStream _fileStream;
    private StreamWriter _streamWriter;

    public LogBuilder()
    {
        _fileStream = new FileStream("log.txt", FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite);
        _streamWriter = new StreamWriter(_fileStream) { AutoFlush = true };
    }

    public void Write()
    {
        lock (Log)
        {
            if (Log.Length <= 0) return;
            _streamWriter.WriteLine(Log.ToString()); // throws here, although Log.Length is greater than zero
            Log.Clear();
        }
    }

    public void Dispose()
    {
        _streamWriter.Close(); _streamWriter.Dispose(); _fileStream.Close(); _fileStream.Dispose();
    }
}
This is not a bug in StringBuilder, it's a bug in your code. And the modification you showed in your follow-up answer (where you replace Log.ToString() with a loop that extracts characters one at a time) doesn't fix it. It won't throw an exception any more, but it won't work properly either.
The problem is that you're using the StringBuilder in two places in your multithreaded code, and one of them does not attempt to lock it, meaning that reading can occur on one thread simultaneously with writing occurring on another. In particular, the problem is this line:
_log.Log.AppendLine(act.Invoke());
You're doing that inside your Parallel.ForEach. You are not making any attempt at synchronization here, even though this will run on multiple threads at once. So you've got two problems:
Multiple calls to AppendLine may be in progress simultaneously on multiple threads
One thread may attempt to be calling Log.ToString at the same time as one or more other threads are calling AppendLine
You'll only get one read at a time because you are using the lock keyword to synchronize those. The problem is that you're not also acquiring the same lock when calling AppendLine.
Your 'fix' isn't really a fix. You've succeeded only in making the problem harder to see. It will now merely go wrong in different and more subtle ways. For example, I'm assuming that your Write method still goes on to call Log.Clear after your for loop completes its final iteration. Well, in between completing that final iteration and making the call to Log.Clear, it's possible that some other thread will have gotten in another call to AppendLine, because there's no synchronization on those calls to AppendLine.
The upshot is that you will sometimes miss stuff. Code will write things into the string builder that then get cleared out without ever being written to the stream writer.
Also, there's a pretty good chance of concurrent AppendLine calls causing problems. If you're lucky, they will crash from time to time. (That's good, because it makes it clear you have a problem to fix.) If you're unlucky, you'll just get data corruption from time to time -- two threads may end up writing into the same place in the StringBuilder, resulting either in a mess or in completely lost data.
Again, this is not a bug in StringBuilder. It is not designed to support being used simultaneously from multiple threads. It's your job to make sure that only one thread at a time does anything to any particular instance of StringBuilder. As the documentation for that class says, "Any instance members are not guaranteed to be thread safe."
Obviously you don't want to hold the lock while you call act.Invoke() because that's presumably the very work you want to parallelize. So I'd guess something like this might work better:
string result = act();
lock (_log.Log)
{
    _log.Log.AppendLine(result);
}
However, if I left it there, I wouldn't really be helping you, because this looks very wrong to me.
If you ever find yourself locking a field in someone else's object, it's a sign of a design problem in your code. It would probably make more sense to modify the design, so that the LogBuilder.Write method accepts a string. To be honest, I'm not even sure why you're using a StringBuilder here at all, as you seem to use it just as a holding area for a string that you immediately write to a stream writer. What were you hoping the StringBuilder would add here? The following would be simpler and doesn't seem to lose anything (other than the original concurrency bugs):
public class LogBuilder : IDisposable
{
    private readonly object _lock = new object();
    private FileStream _fileStream;
    private StreamWriter _streamWriter;

    public LogBuilder()
    {
        _fileStream = new FileStream("log.txt", FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite);
        _streamWriter = new StreamWriter(_fileStream) { AutoFlush = true };
    }

    public void Write(string logLine)
    {
        lock (_lock)
        {
            _streamWriter.WriteLine(logLine);
        }
    }

    public void Dispose()
    {
        _streamWriter.Dispose(); _fileStream.Dispose();
    }
}
I think the cause is that you are accessing the StringBuilder inside the Parallel.ForEach body:
_log.Log.AppendLine(act.Invoke());
_log.Write();
while inside LogBuilder the lock() only protects the Write() path, so the StringBuilder can still be mutated concurrently; switching the stream writer to handle the log character by character only gives the parallel tasks more opportunities to touch the StringBuilder in between.
Separating the parallel work from the logging into a distinct action would likely reduce the problem:
Parallel.ForEach(acts, act =>
{
    _log.Write(act.Invoke());
});
and in the LogBuilder class:
private readonly object _lock = new object();

public void Write(string logLines)
{
    lock (_lock)
    {
        //_wr.WriteLine(logLines);
        Console.WriteLine(logLines);
    }
}
An alternate approach is to use TextWriter.Synchronized to wrap StreamWriter.
class Program
{
    static void Main(string[] args)
    {
        var rnd = new Random();
        var writer = new StreamWriter(@"C:\temp\foo.txt");
        var syncedWriter = TextWriter.Synchronized(writer);
        var tasks = new List<Func<string>>();
        for (int i = 0; i < 1000; i++)
        {
            int local_i = i; // capture a local copy, not a closure reference to i
            tasks.Add(() =>
            {
                var delay = rnd.Next(5);
                Thread.Sleep(delay);
                return local_i.ToString() + " act that lasted " + delay.ToString();
            });
        }
        Parallel.ForEach(tasks, task =>
        {
            var value = task();
            syncedWriter.WriteLine(value);
        });
        writer.Dispose();
    }
}
Here are some of the synchronization helper classes:
http://referencesource.microsoft.com/#q=Synchronized
System.Collections
    static ArrayList Synchronized(ArrayList list)
    static IList Synchronized(IList list)
    static Hashtable Synchronized(Hashtable table)
    static Queue Synchronized(Queue queue)
    static SortedList Synchronized(SortedList list)
    static Stack Synchronized(Stack stack)
System.Collections.Generic
    static IList<T> Synchronized(List<T> list)
System.IO
    static Stream Synchronized(Stream stream)
    static TextReader Synchronized(TextReader reader)
    static TextWriter Synchronized(TextWriter writer)
System.Text.RegularExpressions
    static Match Synchronized(Match inner)
    static Group Synchronized(Group inner)
It seems that it isn't a problem with parallelism; it's a StringBuilder problem.
I have replaced:
_streamWriter.WriteLine(Log.ToString());
with:
for (int i = 0; i < Log.Length; i++)
{
    _streamWriter.Write(Log[i]);
}
And it worked.
For future reference: http://msdn.microsoft.com/en-us/library/system.text.stringbuilder(v=VS.100).aspx (see the Memory allocation section).
