Under what circumstances is it necessary to explicitly release the 'in' ByteBuf passed to the decode() method of ReplayingDecoder in Netty 4.x?
To make it short: never. ReplayingDecoder takes care of releasing 'in' if needed. That said, if you create a new buffer within the decode() method, you are responsible for either adding it to 'out' or releasing it.
I'm successfully using an mpsc::channel() to send messages from a producer thread to a consumer.
The consumer is only ever interested in the latest message. (It uses the message from the previous check if there is no new message.)
In consequence, I'm running the consumer's try_recv() in a loop until it fails to get a new message, and then using the last received message, or the old one if no new messages were found.
Memory is being wasted storing old messages which the consumer will throw away.
How would I build a one-element variant of mpsc::channel()?
(I've considered using sync::Mutex<Option<MyMessage>> but it is critical that the consuming thread blocks for as little time as possible. Also, I want ownership to pass from the producer to the consumer.)
You can do it with an AtomicPtr, whose compare_exchange method should compile to a simple cmpxchg instruction, allowing you to store either a null pointer (std::ptr::null_mut()) or a pointer to an actual message.
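For illustration, here is a minimal sketch of that idea. The Mailbox type and its method names are made up for this answer, and it uses swap rather than compare_exchange, which is enough when the producer unconditionally replaces any unconsumed message:

use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

// Single-slot mailbox: the producer overwrites, the consumer takes.
// A production version should also require T: Send before sharing
// this across threads.
struct Mailbox<T> {
    slot: AtomicPtr<T>,
}

impl<T> Mailbox<T> {
    fn new() -> Self {
        Mailbox { slot: AtomicPtr::new(ptr::null_mut()) }
    }

    // Producer: publish a message, dropping any unconsumed predecessor.
    fn send(&self, msg: T) {
        let fresh = Box::into_raw(Box::new(msg));
        let stale = self.slot.swap(fresh, Ordering::AcqRel);
        if !stale.is_null() {
            // Safety: the swap made us the sole owner of `stale`.
            unsafe { drop(Box::from_raw(stale)) };
        }
    }

    // Consumer: take the latest message, if any; ownership moves out.
    fn take(&self) -> Option<T> {
        let latest = self.slot.swap(ptr::null_mut(), Ordering::AcqRel);
        if latest.is_null() {
            None
        } else {
            // Safety: the swap made us the sole owner of `latest`.
            let boxed = unsafe { Box::from_raw(latest) };
            Some(*boxed)
        }
    }
}

impl<T> Drop for Mailbox<T> {
    fn drop(&mut self) {
        self.take(); // free any message still sitting in the slot
    }
}

The consumer calls take() the same way it called try_recv(), but at most one stale message is ever kept alive.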
There are quite a few possibilities, with various trade-offs.
I'd recommend the arc-swap crate (see below) for a safe and fast interface, and the DIY Double Buffering approach if performance is that critical.
std::mpsc
There's a second option for std::mpsc: the sync_channel function creates a bounded channel, where the sender blocks when the channel is full, until the receiver picks off a message.
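For illustration, a tiny example of that blocking behaviour (this is plain std API):

use std::sync::mpsc;
use std::thread;

fn main() {
    // A bound of 1: the second send blocks until the receiver
    // has taken the first message off the channel.
    let (tx, rx) = mpsc::sync_channel::<u32>(1);

    let producer = thread::spawn(move || {
        tx.send(1).unwrap(); // fits in the buffer
        tx.send(2).unwrap(); // blocks here until rx.recv() runs
    });

    assert_eq!(rx.recv().unwrap(), 1);
    assert_eq!(rx.recv().unwrap(), 2);
    producer.join().unwrap();
}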
I do not think that it is ideal for your use case.
Tokio Watch channel
The Tokio ecosystem has the watch channel designed for the purpose of propagating configuration changes.
Unfortunately it is designed for multiple consumers, so the consumers borrow the messages: there is no transfer of ownership.
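A tiny sketch of that borrow-only interface, assuming the tokio crate with the sync, rt and macros features enabled:

use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = watch::channel("initial");

    tx.send("updated").unwrap();

    rx.changed().await.unwrap();
    // Consumers only borrow the latest value; it stays inside the
    // channel so that other receivers can still see it.
    println!("latest: {}", *rx.borrow());
}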
Arc Swap
I believe the arc-swap crate may be closer to what you need. As the name implies, it provides the moral equivalent of an Atomic<Arc<T>>.
You can use ArcSwapOption<T> to get the equivalent of an Atomic<Option<Arc<T>>>: the consumer simply performs a let new = atomic.swap(None);, then checks whether new is None (nothing new) or Some(Arc<T>), in which case it has received an updated configuration.
Do be mindful of the cost of dropping the previous Arc<T> when swapping a new one in: free is typically more expensive than malloc.
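A minimal sketch of that consumer pattern, assuming the arc-swap 1.x API; Config is a made-up stand-in for the message type:

use std::sync::Arc;
use arc_swap::ArcSwapOption;

struct Config {
    threshold: u32,
}

fn main() {
    let cell: ArcSwapOption<Config> = ArcSwapOption::empty();

    // Producer: publish a new message; any unconsumed predecessor
    // is dropped by the store.
    cell.store(Some(Arc::new(Config { threshold: 42 })));

    // Consumer: take the latest message, leaving the cell empty.
    match cell.swap(None) {
        Some(config) => println!("new config: {}", config.threshold),
        None => println!("nothing new; keep using the previous one"),
    }
}

Note that the consumer receives an Arc<T>; if it needs the plain T, Arc::try_unwrap will hand it over whenever the consumer holds the only reference.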
Back to std
You could use an AtomicPtr<T>. It'll require you to use unsafe, and would be a smidgen faster than ArcSwap by virtue of avoiding the reference counting.
It would suffer from the same drop issue, though.
DIY Double Buffering
You could also simply Do It Yourself. A simple double-buffering storage would work.
By storing a plain Option<T>, you avoid the extra allocation (and thus the extra de-allocation), at the cost of making the check itself slower, as you may now need to check both buffers. Whether the consumer can get away with checking a single buffer is not clear.
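Here is one possible reading of that idea as a sketch; the DoubleBuffer type, the per-message sequence number, and the drain-both-slots consumer are my own choices, not a canonical design:

use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;

// Two slots, so producer and consumer rarely contend on the same
// lock; a sequence number tells the consumer which slot is newest.
struct DoubleBuffer<T> {
    slots: [Mutex<Option<(u64, T)>>; 2],
    seq: AtomicU64,
}

impl<T> DoubleBuffer<T> {
    fn new() -> Self {
        DoubleBuffer {
            slots: [Mutex::new(None), Mutex::new(None)],
            seq: AtomicU64::new(0),
        }
    }

    // Producer: alternate slots, overwriting whatever stale message
    // was left there from two sends ago.
    fn send(&self, msg: T) {
        let seq = self.seq.fetch_add(1, Ordering::Relaxed) + 1;
        let slot = (seq % 2) as usize;
        *self.slots[slot].lock().unwrap() = Some((seq, msg));
    }

    // Consumer: check both buffers, keep the newest, drop the rest.
    fn take(&self) -> Option<T> {
        let a = self.slots[0].lock().unwrap().take();
        let b = self.slots[1].lock().unwrap().take();
        match (a, b) {
            (Some((sa, ma)), Some((sb, mb))) => {
                Some(if sa > sb { ma } else { mb })
            }
            (Some((_, m)), None) | (None, Some((_, m))) => Some(m),
            (None, None) => None,
        }
    }
}

Each lock is held only for a single store or take, so neither side blocks for long; as said above, it is the need to check both buffers that makes the consumer's check slower.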
I am working on a custom VCL-only date edit component. Am planning on using the System.SysUtils.FormatDateTime function to convert a TDate to a string. There are two versions of FormatDateTime--one is thread-safe and the other is not. Since the VCL is not thread-safe, should I prefer the thread-safe version or is the non thread-safe version okay to use?
tl;dr: use the thread-safe version.
If you use the non-thread-safe version, then you constrain any consumer of your component not to use that same non-thread-safe version in a thread.
That's not an unreasonable constraint by any measure. Using the non-thread-safe version is only really viable in a program that never does so away from the main thread. So a program would have to be breaking the rules in the first place for your component to be caught up in the fallout.
Having said that, a component author should, as a principle, avoid making any assumptions about the consuming program. So best practice is to call the thread-safe version. Then there can be no debate: your component cannot be involved in any thread-safety issue with these locale global variables.
As long as the caller will always be the main thread, it doesn't matter if you choose the non-thread-safe variant of the function, and that seems to be the case here: if you are not creating a worker thread inside your component from which you call that function, and you adhere to the rule that you won't use your control inside any worker thread, you'll be safe with it.
But there's more to consider. If you have a recent Delphi version and keep the UpdateFormatSettings property enabled, the globally declared format settings variable used by the non-thread-safe overload of the FormatDateTime function will get updated when the user modifies the locale settings on their system. I can't say anything about a control notification (which would let you update the output), because I only have D2009 at hand right now, and these changes were added later.
I am coming from a C# background; as I understand it, Swift has automatic memory management like C# does.
An issue in C# that requires the use of “programming patterns” is the timely releasing of resources, as the garbage collector runs at an undefined time and hence cannot be used to close files, release network connections, etc. (Hence IDisposable and the “using” keyword.)
How is this dealt with when programming in Swift?
Swift seems to use the same memory management model as Objective-C with ARC enabled.
That means there is no garbage collector. Instead, ARC uses reference counting, with compiler-inserted increment and decrement operations when (strong) references are set.
The absence of a (threaded) collector means finalization is deterministic in Swift. Objects are deallocated when the last reference goes out of scope.
Swift has ARC enabled, the same as Objective-C. You can find the reference in the Apple documentation:
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/AutomaticReferenceCounting.html#//apple_ref/doc/uid/TP40014097-CH20-XID_50
Apart from Swift's default handling mentioned by NikolaiRuhe, you can use the deinit method to perform cleanup just before an object is deallocated, for example to remove observers if you have implemented any.
I'm using [self retain] to hold an object itself, and [self release] to free it elsewhere. This is very convenient sometimes. But this is actually a reference loop, or deadlock, which most garbage-collection systems aim to solve. I wonder if Objective-C's autorelease pool may find the loops and give me surprises by releasing the object before reaching [self release]. Is my way encouraged or not? How can I ensure that the garbage collection, if present, won't be too smart?
This way of working is very discouraged. It looks like you need some pointers on memory management.
Theoretically, an object should live as long as it is useful. Useful objects can easily be spotted: they are directly referenced somewhere on a thread stack, or, if you made a graph of all your objects, reachable through some path linked to an object referenced somewhere on a thread stack. Objects that live "by themselves", without being referenced, cannot be useful, since no thread can reach them to make them perform something.
This is how a garbage collector works: it traverses your object graph and collects every unreferenced object. Mind you, Objective-C is not always garbage-collected, so some rules had to be established. These are the memory management guidelines for Cocoa.
In short, it is based over the concept of 'ownership'. When you look at the reference count of an object, you immediately know how many other objects depend on it. If an object has a reference count of 3, it means that three other objects need it to work properly (and thus own it). Every time you keep a reference to an object (except in rare conditions), you should call its retain method. And before you drop the reference, you should call its release method.
There are some other important rules regarding the creation of objects. When you call alloc, copy or mutableCopy, the object you get already has a refcount of 1. In this case, it means the calling code is responsible for releasing the object once it's not required. This can be problematic when you return references to objects: once you return it, in theory, you don't need it anymore, but if you call release on it, it'll be destroyed right away! This is where NSAutoreleasePool objects come in. By calling autorelease on an object, you give up ownership of it (as if you called release), except that the reference is not immediately revoked: instead, it is transferred to the NSAutoreleasePool, which will release it once it receives the release message itself. (Whenever some of your code is called back by the Cocoa framework, you can be assured that an autorelease pool already exists.)
It also means that you do not own objects if you did not call alloc, copy or mutableCopy on them; in other words, if you obtain a reference to such an object otherwise, you don't need to call release on it. If you need to keep around such an object, as usual, call retain on it, and then release when you're done.
Now, if we try to apply this logic to your use case, it stands out as odd. An object cannot logically own itself, as it would mean that it can exist, standalone in memory, without being referenced by a thread. Obviously, if you have the occasion to call release on yourself, it means that one of your methods is being executed; therefore, there's gotta be a reference around for you, so you shouldn't need to retain yourself in the first place. I can't really say with the few details you've given, but you probably need to look into NSAutoreleasePool objects.
If you're using the retain/release memory model, it shouldn't be a problem. Nothing will go looking for your [self retain] and subvert it. That may not be the case, however, if you ever switch over to using garbage collection, where -retain and -release are no-ops.
Here's another thread on SO on the same topic.
I'd reiterate the answer that includes the phrase "overwhelming sense of ickyness." It's not illegal, but it feels like a poor plan unless there's a pretty strong reason. If nothing else, it seems sneaky, and that's never good in code. Do heed the warning in that thread to use -autorelease instead of -release.
I have written a managed class that wraps around an unmanaged C++ object, but I found that - when using it in C# - the GC kicks in early while I'm executing a method on the object. I have read up on garbage collection and how to prevent it from happening early. One way is to use a "using" statement to control when the object is disposed, but this puts the responsibility on the client of the managed object. I could add to the managed class:
void MyManagedObject::MyMethod()
{
    // Keep 'this' reachable so the GC cannot collect the wrapper
    // while the unmanaged member is still in use.
    System::Runtime::InteropServices::GCHandle handle =
        System::Runtime::InteropServices::GCHandle::Alloc(this);

    // access unmanaged member

    handle.Free();
}
This appears to work. Being new to .NET, I wonder: how do other people deal with this problem?
Thank you,
Johan
You might like to take a look at this article: http://www.codeproject.com/Tips/246372/Premature-NET-garbage-collection-or-Dude-wheres-my. I believe it describes your situation exactly. In short, the remedies are either a using block or a GC.KeepAlive. However, I agree that in many cases you will not wish to pass this burden onto the client of the wrapper; in this case, a call to GC.KeepAlive(this) at the end of every wrapper method is a good solution.
You can use GC.KeepAlive(this) in your method's body if you want to keep the finalizer from being called. As others noted correctly in the comments, if your this reference is not live during the method call, it is possible for the finalizer to be called and for memory to be reclaimed during the call.
See http://blogs.microsoft.co.il/blogs/sasha/archive/2008/07/28/finalizer-vs-application-a-race-condition-from-hell.aspx for a detailed case study.