When using threads in PowerShell, are we able to use the lock statement like in C#?
Or do we need to use the code that lock compiles down to, i.e. use the Monitor class?
There is no native lock statement in PowerShell as such, but you can acquire/release an exclusive lock on a specified object using the Monitor class. It can be used to pass data between threads when working with runspaces, as demonstrated in David Wyatt's blog post Thread Synchronization (in PowerShell?).
Quote:
The MSDN page for the ICollection.IsSynchronized property mentions that you must explicitly lock the SyncRoot property of a collection to perform a thread-safe enumeration of its contents, even if you're dealing with a synchronized collection.
Basic example:
# Create synchronized hashtable for thread communication
$SyncHash = [hashtable]::Synchronized(@{Test='Test'})
$LockTaken = $false
try
{
    # Lock it
    [System.Threading.Monitor]::Enter($SyncHash)
    $LockTaken = $true
    foreach ($keyValuePair in $SyncHash.GetEnumerator())
    {
        # Hashtable is locked, do something
        $keyValuePair
    }
}
catch
{
    # Catch exception
    throw 'Lock failed!'
}
finally
{
    if ($LockTaken)
    {
        # Release lock
        [System.Threading.Monitor]::Exit($SyncHash)
    }
}
David has also written a fully functional Lock-Object module, which implements this approach.
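A reusable wrapper in that spirit only takes a few lines. The function below is a simplified sketch with a made-up name (Invoke-Locked), not the actual Lock-Object implementation; see the module for the real thing:
function Invoke-Locked
{
    param
    (
        [Parameter(Mandatory = $true)] [object] $InputObject,
        [Parameter(Mandatory = $true)] [scriptblock] $ScriptBlock
    )

    $lockTaken = $false
    try
    {
        # Acquire an exclusive lock on the object
        [System.Threading.Monitor]::Enter($InputObject)
        $lockTaken = $true
        & $ScriptBlock
    }
    finally
    {
        if ($lockTaken)
        {
            # Release the lock even if the script block throws
            [System.Threading.Monitor]::Exit($InputObject)
        }
    }
}

# Usage: enumerate the synchronized hashtable while holding its lock (per the SyncRoot guidance quoted above)
Invoke-Locked -InputObject $SyncHash.SyncRoot -ScriptBlock {
    $SyncHash.GetEnumerator() | ForEach-Object { $_ }
}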
Related
A common design pattern is to have a "manager" object that maintains a set of "managed" objects. In C++11 and later, the Manager likely keeps shared_ptrs to the Managed objects. If the Managed objects need a reference back to the Manager, they wisely do so by storing a weak_ptr<Manager>. The Manager can establish this relationship itself by constructing each Managed object directly (through a factory function, for example), and passing its own shared_ptr to the Managed object. The Manager can obtain its own shared_ptr by using shared_from_this(). None of these choices are required, but they are common and reasonable.
Now consider a Manager that maintains its Managed objects in a separate thread. A user of the Manager-Managed system may ask the Manager to create Managed objects, then run() the Manager so that it maintains those objects in the background until stop() is called. Still seems perfectly reasonable, right?
But now consider the Manager's destructor. It would be a nasty error to allow its background thread to continue past destruction. So we call stop() from the destructor.
Yet this raises a serious issue. Because the Manager is owned by shared_ptrs, its destructor will be called precisely when no shared_ptr references it. At that point, all weak_ptrs to the Manager will be expired(). Therefore all of the Managed objects' manager pointers will be invalid. And since the Managed objects are being "worked" (their member functions called) in a separate thread, they may suddenly find themselves with a null manager. If they assume their manager is non-null, the result is an error of one (severe) kind or another.
I see three potential solutions to the problem.
Add explicit checks for non-null manager everywhere it's used in Managed object code. Yet depending on the complexity of Managed objects, these checks are likely to be fiddly and error-prone.
Ensure that stop() is called prior to the manager being destroyed. Yet this violates the semantics of shared_ptr. There is no single owner of Manager: it is shared. So no single object knows when it will die or when it should stop updating. Moreover, it's simply bad form to leave Manager's destructor without a call to stop(): RAII implies that the Manager must deal with its own thread.
Make the Manager detect its own imminent death and stop calling Managed objects when it's dying. This has the benefit of centralizing the burden: the Manager should be able to detect its own death in a few places (for example, loops over all Managed objects) and refuse to deal with Managed objects in those places. Since the Managed objects won't be called, they won't attempt to use their expired weak_ptr<Manager>s and therefore won't fail (or need to check them constantly).
Is there a standard, correct way of dealing with this problem? Is the problem as I've framed it in violation of some well-understood principle for design, use of shared_ptr/weak_ptr, or use of threads?
The following code illustrates the problem.
#include <memory>
#include <thread>
#include <vector>
#include <atomic>
#include <chrono>
#include <functional>
#include <cassert>
using namespace std;
class Manager;
class Managed {
public:
explicit Managed( shared_ptr< Manager > manager )
: m_manager( manager )
{}
void doStuff() {
// Fails because Manager::work() may still be traversing Managed objects in a separate thread
// while the Manager is in its destructor (and therefore all weak_ptrs to it are expired).
assert( m_manager.expired() == false );
// ...
}
private:
weak_ptr< Manager > m_manager;
};
class Manager : public enable_shared_from_this< Manager > {
public:
~Manager() {
stop(); // Problematic: all weak_ptrs to me are now expired(), yet work() continues a moment.
}
shared_ptr< Managed > create() {
assert( !m_thread.joinable() ); // Mustn't be running, to avoid concurrency issues.
auto managed = make_shared< Managed >( shared_from_this() );
m_managed.push_back( managed );
return managed;
}
void run() {
m_continue = true;
m_thread = thread{ bind( &Manager::work, this ) };
}
void stop() {
m_continue = false;
if( m_thread.joinable() ) {
m_thread.join();
}
}
private:
vector< shared_ptr< Managed >> m_managed;
thread m_thread;
atomic_bool m_continue{ true };
void work() {
while( m_continue ) {
for( const auto& managed : m_managed ) {
managed->doStuff();
}
}
}
};
int main() {
// Create the manager and a bunch of managed objects.
auto manager = make_shared< Manager >();
for( size_t i = 0; i < 10000; ++i ) {
manager->create();
}
// Run for a while.
manager->run();
this_thread::sleep_for( chrono::seconds{ 1 } );
manager.reset(); // Calls manager->stop() indirectly.
return 0;
}
Shared pointers do not solve every resource problem. They solve one particular problem that happens to be easy to fix; blindly using shared pointers causes more problems than it fixes, in my experience.
Shared pointers are about distributing the right to extend the lifetime of some object to an unbounded set of clients. Clients who hold weak pointers are those who want to be able to passively know when the object has gone away. This means they must always check, and once it is gone they have no right to get it back.
If you have singular ownership, then use a unique pointer not a shared pointer.
If you guarantee the workers do not outlive the manager, give them a raw pointer not a weak pointer.
If you want to clean up before you destroy the object, give the unique pointer a custom deleter that does cleanup before delete.
Then, by the time delete runs, your workers should all be gone. Assert that, and abort if the assertion fails.
If you want non-unique-pointer semantics, wrap your actual manager as a unique pImpl within a wrapper type that offers pseudo-value semantics, and either use a custom deleter or have the wrapper's destructor call the pre-destruction cleanup code.
shared_from_this is no longer involved.
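To make that concrete, here is a rough sketch of that unique-ownership design, adapted from the code in the question (the names and threading details are illustrative, not a drop-in fix):
#include <atomic>
#include <chrono>
#include <functional>
#include <memory>
#include <thread>
#include <vector>

class Manager;

// Managed objects hold a plain raw pointer: the design below guarantees the
// Manager outlives them, so no weak_ptr checks are needed.
class Managed {
public:
    explicit Managed( Manager* manager ) : m_manager( manager ) {}
    void doStuff() { /* m_manager is guaranteed to be alive here */ }
private:
    Manager* m_manager;
};

class Manager {
public:
    Managed* create() {
        m_managed.push_back( std::unique_ptr< Managed >( new Managed( this ) ) );
        return m_managed.back().get();
    }
    void run() {
        m_continue = true;
        m_thread = std::thread( [this] { work(); } );
    }
    void stop() {
        m_continue = false;
        if( m_thread.joinable() ) {
            m_thread.join();
        }
    }
private:
    void work() {
        while( m_continue ) {
            for( const auto& managed : m_managed ) {
                managed->doStuff();
            }
        }
    }
    std::vector< std::unique_ptr< Managed >> m_managed;
    std::thread m_thread;
    std::atomic_bool m_continue{ false };
};

// The custom deleter performs the pre-destruction cleanup: it joins the worker
// thread while the Manager is still fully alive, then deletes it.
using ManagerPtr = std::unique_ptr< Manager, std::function< void( Manager* ) >>;

ManagerPtr makeManager() {
    return ManagerPtr( new Manager, []( Manager* m ) {
        m->stop();
        delete m;
    } );
}

int main() {
    auto manager = makeManager();
    for( int i = 0; i < 10000; ++i ) {
        manager->create();
    }
    manager->run();
    std::this_thread::sleep_for( std::chrono::seconds{ 1 } );
    manager.reset(); // Deleter stops the thread first, so no Managed object ever sees a dead Manager.
    return 0;
}
The deleter provides the ordering the shared_ptr destructor could not: the worker thread is joined while the Manager is still fully constructed, so the raw back-pointers held by the Managed objects never dangle.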
I'm not very experienced with this topic so forgive me if this isn't very clear.
I've created a Portable Class Library that has an ObservableCollection of Sections, and each section has an ObservableCollection of Items.
Both of these collections are bound to the UI of separate Win8 and WP8 apps.
I'm trying to figure out the correct way to populate these collections so that the UI gets updated from the PCL class.
If the class was inside the win8 project I know I could do something like Dispatcher.BeginInvoke, but this doesn't translate to the PCL, nor would I be able to reuse that in the WP8 project.
In this thread (Portable class library equivalent of Dispatcher.Invoke or Dispatcher.RunAsync) I discovered the SynchronizationContext class.
I passed in a reference to the main app's SynchronizationContext, and when I populate the sections I can do so because it's only the one object being updated:
if (SynchronizationContext.Current == _synchronizationContext)
{
// Execute the CollectionChanged event on the current thread
UpdateSections(sections);
}
else
{
// Post the CollectionChanged event on the creator thread
_synchronizationContext.Post(UpdateSections, sections);
}
However, when I try to do the same thing with articles, I have to have a reference to both the section AND the article, but the Post method only allows me to pass in a single object.
I attempted to use a lambda expression:
if (SynchronizationContext.Current == _synchronizationContext)
{
// Execute the CollectionChanged event on the current thread
section.Items.Add(item);
}
else
{
// Post the CollectionChanged event on the creator thread
_synchronizationContext.Post((e) =>
{
section.Items.Add(item);
}, null);
}
but I'm guessing this is not correct, as I'm getting an error about being "marshalled for a different thread".
So where am I going wrong here? How can I update both collections correctly from the PCL so that both apps can also update their UI?
Many thanks!
Hard to say without seeing the rest of the code, but I doubt it has anything to do with Portable Class Libraries. It would be good to see the details of the exception (type, message and stack trace).
The way you call Post() with more than one argument looks correct. What happens if you remove the if check and simply always go through SynchronizationContext.Post()?
BTW: I don't explicitly pass in the SynchronizationContext. I assume that the ViewModel is created on the UI Thread. This allows me to capture it like this:
public class MyViewModel
{
private SynchronizationContext _context = SynchronizationContext.Current;
}
I would recommend that, at least in your ViewModels, all publicly observable state changes (i.e. property change notifications and modifications to ObservableCollections) happen on the UI thread. I'd recommend doing the same thing with your model state changes, but it might make sense to let them happen on different threads and marshal those changes to the UI thread in your ViewModels.
To do this, of course, you need to be able to switch to the UI thread in portable code. If SynchronizationContext isn't working for you, then just create your own abstraction for the dispatcher (e.g. an IRunOnUIThread interface).
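As a rough illustration of such an abstraction (IRunOnUIThread is the name suggested above; the rest of the naming and implementation is just one possible sketch):
using System;
using System.Threading;

// In the Portable Class Library: a platform-neutral way to run code on the UI thread.
public interface IRunOnUIThread
{
    void RunOnUIThread(Action action);
}

// In each platform project (Win8 / WP8): an implementation that captures the UI
// thread's SynchronizationContext. It must be constructed on the UI thread.
public class UIThreadRunner : IRunOnUIThread
{
    private readonly SynchronizationContext _context = SynchronizationContext.Current;

    public void RunOnUIThread(Action action)
    {
        if (SynchronizationContext.Current == _context)
        {
            // Already on the UI thread
            action();
        }
        else
        {
            // Marshal to the UI thread
            _context.Post(_ => action(), null);
        }
    }
}

// Usage from the PCL view model:
// _uiThread.RunOnUIThread(() => section.Items.Add(item));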
The reason you were getting the "marshalled on a different thread" error is that you weren't passing the item to add to the list as the "state" object on the Post(action, state) method.
Your code should look like this:
if (SynchronizationContext.Current == _synchronizationContext)
{
// Execute the CollectionChanged event on the current thread
section.Items.Add(item);
}
else
{
// Post the CollectionChanged event on the creator thread
_synchronizationContext.Post((e) =>
{
var item = (YourItemType) e;
section.Items.Add(item);
}, item);
}
If you make that change, your code will work fine from a PCL.
So I'm trying to use the TPL features in .NET 4.0 and have some code like this (don't laugh):
/// <summary>Fetches a thread along with its posts. Increments the thread viewed counter.</summary>
public Thread ViewThread(int threadId)
{
// Get the thread along with the posts
Thread thread = this.Context.Threads.Include(t => t.Posts)
.FirstOrDefault(t => t.ThreadID == threadId);
// Increment viewed counter
thread.NumViews++;
Task.Factory.StartNew(() =>
{
try {
this.Context.SaveChanges();
}
catch (Exception ex) {
this.Logger.Error("Error viewing thread " + thread.Title, ex);
}
this.Logger.DebugFormat(@"Thread ""{0}"" viewed and incremented.", thread.Title);
});
return thread;
}
So my immediate concerns with the lambda are this.Context (my Entity Framework data context member), this.Logger (logger member) and thread (used in the logger call). Normally in the QueueUserWorkItem() days, I would think these would need to be passed into the delegate as part of a state object. Are closures going to bail me out of needing to do that?
Another issue is that the type that this routine is in implements IDisposable and thus is in a using statement. So if I do something like...
using (var bl = new ThreadBL()) {
t = bl.ViewThread(threadId);
}
... am I going to create a race between a dispose() call and the TPL getting around to invoking my lambda?
Currently I'm seeing the context save the data back to my database but no logging - no exceptions either. This could be a configuration thing on my part but something about this code feels odd. I don't want to have unhandled exceptions in other threads. Any input is welcome!
As for your question on closures, yes, this is exactly what closures are about. You don't worry about passing state; instead it is captured for you from any outer context and copied onto a compiler-supplied class, which is also where the closure method will be defined. The compiler does a lot of magic here to make your life simple. If you want to understand more I highly recommend picking up Jon Skeet's C# in Depth. The chapter on closures is actually available here.
As for your specific implementation, it will not work, mainly because of the exact problem you mentioned: the Task will be scheduled at the end of ViewThread but may not execute before your ThreadBL instance is disposed of.
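One way to close that gap, sketched below under the assumption that you can change ViewThread's signature, is to hand the pending Task back to the caller so it can be waited on before the using block disposes the ThreadBL:
// Sketch: adaptation of the question's ViewThread that exposes the pending save,
// so the caller can wait for it before the ThreadBL (and its Context) is disposed.
public Thread ViewThread(int threadId, out Task saveTask)
{
    Thread thread = this.Context.Threads.Include(t => t.Posts)
        .FirstOrDefault(t => t.ThreadID == threadId);
    thread.NumViews++;

    saveTask = Task.Factory.StartNew(() =>
    {
        try
        {
            this.Context.SaveChanges();
            this.Logger.DebugFormat(@"Thread ""{0}"" viewed and incremented.", thread.Title);
        }
        catch (Exception ex)
        {
            this.Logger.Error("Error viewing thread " + thread.Title, ex);
        }
    });

    return thread;
}

// Caller: the using block no longer races with the background save.
using (var bl = new ThreadBL())
{
    Task saveTask;
    t = bl.ViewThread(threadId, out saveTask);
    saveTask.Wait();
}
Of course, if the caller always waits, the background task buys you little over a synchronous SaveChanges; the essential point is that something must keep the ThreadBL alive until the task finishes.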
Here is my problem: I have created a SortableCollection : ObservableCollection
and added a sort method (sorting colors).
When I sort the collection with the main thread, everything works fine.
But when I try to sort this custom collection using an item in the collection, I get an exception: (The calling thread cannot access this object because a different thread owns it).
I have looked on the web and found several solutions, for example: One Solution
This type of solution makes the collection thread-safe for insert, remove and move operations,
but not for a custom sort.
Thanks for the help.
WPF classes have thread affinity. This means that all changes to those objects must be made on the same thread where they were created. It is genuinely difficult to create a thread-safe user interface API, so Microsoft chose to keep it single-threaded and enforce that with run-time checks.
That said, there are a few options you have to perform your sort in a background thread, and then apply it in the UI thread. The first option is to copy your SortableCollection into a plain old List or Array and perform the sort in the background. Once the background thread is complete, you use a Dispatcher to execute code in the UI thread. Every UI element in WPF extends System.Windows.Threading.DispatcherObject and most extend System.Windows.Freezable. The DispatcherObject is where you get the Dispatcher to execute code in the UI thread.
Logically, the execution would be something like this:
// Delegate type for sorting a detached copy of the items on a background thread
private delegate List<T> BackgroundSortDelegate(List<T> items);

public void BackgroundSort()
{
    // Work on a copy so the UI-bound collection is not touched off the UI thread
    List<T> items = new List<T>(this);
    BackgroundSortDelegate del = Sort;
    del.BeginInvoke(items, SortCompleted, del);
}

private void SortCompleted(IAsyncResult result)
{
    BackgroundSortDelegate del = (BackgroundSortDelegate)result.AsyncState;
    List<T> sorted = del.EndInvoke(result);
    // Marshal the sorted result back to the UI thread before applying it
    this.Dispatcher.Invoke(() => { this.Collection = sorted; });
}
The short explanation of what happened is that the background worker/delegate is using a copy of the items in this list. Once the sort is complete, we are calling the Dispatcher object and invoking an action. In that action we are assigning the new sorted list back to our object.
The key to assigning the result of any background work within the UI thread is to use the UI's Dispatcher object. There's actually probably a half dozen ways to invoke a background worker in C#, but the approach to get your work in a background thread into the UI thread is the same.
I have an application that runs multiple threads which are sometimes cancelled. These threads may call into another object that internally accesses a resources (socket). To prevent the resource to be accessed simultaneously, there is a critical section to get some order in the execution.
Now, when cancelling the thread, it (sometimes) happens that the thread is just within that code that is blocked by the critical section. The critical section is locked using an object and I was hoping that upon cancellation of the thread this object would be destructed and consequently release the lock. However this does not seem to be the case, so that at thread destruction this resource object is permanently locked.
Changing the resource object is probably not an option (3rd party delivered), plus it makes sense to prevent simultaneous access to a resource that can not be used in parallel.
I have experimented with preventing the thread from being cancelled using pthread_setcancelstate while the section is locked/unlocked, but this feels a bit dirty and would not be a final solution for other situations (e.g. acquired mutexes, etc.).
I know that a preferred solution would be not to use pthread_cancel but instead to set a flag in the thread so it cancels itself when it is ready (in a clean way). However, as I want to cancel the thread ASAP, I was wondering (also out of academic interest) whether there are other options to do that.
Thread cancellation without help from the application (the mentioned flag) is a bad idea. Just google it.
Actually cancellation is so hard that it has been omitted from the latest C++0x draft. You can search http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2497.html and won't find any mention of cancellation at all. Here's the definition of the proposed thread class (you won't find cancel there):
class thread
{
public:
// types:
class id;
typedef implementation-defined native_handle_type; // See [thread.native]
// construct/copy/destroy:
thread();
template <class F> explicit thread(F f);
template <class F, class ...Args> thread(F&& f, Args&&... args);
~thread();
thread(const thread&) = delete;
thread(thread&&);
thread& operator=(const thread&) = delete;
thread& operator=(thread&&);
// members:
void swap(thread&&);
bool joinable() const;
void join();
void detach();
id get_id() const;
native_handle_type native_handle(); // See [thread.native]
// static members:
static unsigned hardware_concurrency();
};
You could use pthread_cleanup_push() to push a cancellation cleanup handler onto the thread's cancellation cleanup stack. This handler would be responsible for unlocking the critical section.
Once you leave the critical section, you should call pthread_cleanup_pop(0) to remove it.
i.e.
CRITICAL_SECTION g_section;

void clean_crit_sec( void *arg )
{
    LeaveCriticalSection( &g_section );
}

void *thrfunc( void *arg )
{
    EnterCriticalSection( &g_section );
    pthread_cleanup_push( clean_crit_sec, NULL );

    // Do something that may be cancellable

    LeaveCriticalSection( &g_section );
    pthread_cleanup_pop( 0 );
    return NULL;
}
This would still leave a small race condition where the critical section has been unlocked but the cleanup handler could still be executed, if the thread was cancelled between the Leave... and the cleanup_pop.
You could instead call pthread_cleanup_pop with 1, which executes the cleanup handler for you, so you don't leave the critical section yourself, i.e.
CRITICAL_SECTION g_section;

void clean_crit_sec( void *arg )
{
    LeaveCriticalSection( &g_section );
}

void *thrfunc( void *arg )
{
    EnterCriticalSection( &g_section );
    pthread_cleanup_push( clean_crit_sec, NULL );

    // Do something that may be cancellable

    pthread_cleanup_pop( 1 ); // this will pop the handler and execute it
    return NULL;
}
The idea of aborting threads without a well-defined control method (i.e. flags) is just so evil that you simply shouldn't do it.
If you have third-party code that leaves you no option but to do this, I might go as far as suggesting that you abstract the horrible code inside a separate process, and then interact with that process instead, separating each such component nicely.
Now, such a design would be even worse on Windows, because Windows is not good at running multiple processes; however, it is not such a bad idea on Linux.
Of course, having a sensible design for your threaded modules would be even better...
(Personally, I prefer not using threads at all, and always using processes, or non-blocking designs)
If the lock which controls the critical section is not exposed to you directly, there is not much you can do. When you cancel a thread, all the cleanup handlers for the thread are executed in the normal reverse order, but of course these handlers can only release mutexes you have access to. So you really can't do much more than disable cancelling during your visit to the 3rd-party component.
I think your best solution is to use both a flag and the pthread_cancel functionality. When you are entering the 3rd-party component, disable cancel processing (PTHREAD_CANCEL_DISABLE); when you get back out of it, re-enable it. After re-enabling it, check for the flag:
/* In thread which you want to be able to be canceled: */
int oldstate;
pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
... call 3rd party component ...
pthread_setcancelstate(oldstate, NULL);
if (cancelled_flag) pthread_exit(PTHREAD_CANCELED);
/* In the thread canceling the other one. Note the order of operations
to avoid race condition: */
cancelled_flag = true;
pthread_cancel(thread_id);