can pthread_mutexattr_setrobust apply to pthread_rwlock_t? - linux

The robustness of a mutex is very important to my program, since it handles the case where a process dies without releasing the mutex.
But according to the documentation, pthread_mutexattr_setrobust only applies to pthread_mutex_t, not to pthread_rwlock_t. Is there any way to set the robustness of a pthread_rwlock_t? Or is its implementation robust by default?

according to the documentation, pthread_mutexattr_setrobust only applies to pthread_mutex_t
More precisely, pthread_mutexattr_setrobust() sets a property of a pthread_mutexattr_t object, and these are used (only) for configuring objects of type pthread_mutex_t. This happens at initialization of the mutex via pthread_mutex_init().
The corresponding initialization function for read/write locks is pthread_rwlock_init(), and its documentation shows that the corresponding attribute object type, accepted by that function, is pthread_rwlockattr_t. Implementations may provide whatever properties they like as extensions, but the only one specified for this type by the current version of POSIX is pshared. Thus no, there is no (portable) robustness option for pthreads read/write locks.
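For reference, here is roughly how robustness is requested for a plain pthread mutex, the API the answer refers to. This is a minimal sketch: the helper names are made up for illustration and error handling is abbreviated.

#include <pthread.h>
#include <cerrno>

// Hypothetical helper: initialise a robust, process-shared mutex
// (for example one placed in memory shared between processes).
int init_robust_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
    int rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}

// Hypothetical helper: lock the mutex, recovering if the previous owner died.
int lock_robust(pthread_mutex_t *m)
{
    int rc = pthread_mutex_lock(m);
    if (rc == EOWNERDEAD) {
        // The previous owner died while holding the lock: repair the
        // protected state if necessary, then mark the mutex consistent.
        pthread_mutex_consistent(m);
        rc = 0;
    }
    return rc;
}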


DirectX11 resource Release Multi-Threading

I've read https://learn.microsoft.com/en-us/windows/desktop/direct3d11/overviews-direct3d-11-render-multi-thread-intro
It states that I can make calls to ID3D11Device from multiple threads (unless D3D11_CREATE_DEVICE_SINGLETHREADED was used), but that calls to ID3D11DeviceContext have to be surrounded with a critical section.
I haven't found any information about releasing resources via their 'Release' method, for resource types such as textures, render targets, vertex/index buffers, and shaders:
ID3D11Texture2D, ID3D11Texture3D, ID3D11ShaderResourceView, ID3D11RenderTargetView, ID3D11DepthStencilView
ID3D11Buffer.
ID3D11VertexShader, ID3D11HullShader, ID3D11DomainShader, ID3D11PixelShader.
1) Can I call 'Release' for those resources at any time from any thread without using critical sections while they ARE NOT in use by the render thread's ID3D11DeviceContext?
2) Can I call 'Release' for those resources from other threads even while they ARE in use by ID3D11DeviceContext in the render thread?
Or do I need to surround the Release calls with the same critical section used for accessing ID3D11DeviceContext?
Generally the internal implementation of COM reference counts is done in a thread-safe manner (atomic increments/decrements), so it's safe to call AddRef and Release from multiple threads.
Of course, if the refcount goes to 0 the object is destroyed, so if multiple threads use the same resource it's important that the resource holds enough references to stay live. In Direct3D, object destruction is typically deferred, so the actual cleanup may not happen for a few frames, but you should still keep a non-zero refcount as long as anyone is referencing the object.
Direct3D 11 uses the same rules as Direct3D 10. It uses 'weak references' for the pipeline set methods, so just having a resource set on the device context is not sufficient to increase its reference count. In other words: if you have two threads both rendering with the same resource, then each thread must hold a reference count on the object to keep it 'live', whether or not it's 'actively set' on a device context at any given moment.
It works this way to avoid the overhead of constantly incrementing and decrementing reference counts every rendering frame. In Direct3D 9 this was happening thousands of times a frame or more.
Also, if the ID3D11Device reaches a zero ref-count, it and all its child objects are released regardless of the individual device-child reference counts.
See Microsoft Docs.
The best answer is to use a smart pointer like Microsoft::WRL::ComPtr and have each thread that uses a given resource hold its own ComPtr pointing to that resource. That way the only real special case you'll have is device tear-down (such as responding to DXGI_ERROR_DEVICE_REMOVED or doing a 'clean exit').
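As a rough illustration of that advice, here is a minimal sketch of per-thread ComPtr ownership; the struct and function names are invented for the example.

#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Each thread that uses the resource holds its own ComPtr, i.e. its own
// reference, so the resource stays live no matter which thread
// finishes with it first.
struct RenderThreadState
{
    ComPtr<ID3D11ShaderResourceView> texture; // one reference per thread
};

void ShareTexture(RenderThreadState& threadA, RenderThreadState& threadB,
                  const ComPtr<ID3D11ShaderResourceView>& source)
{
    threadA.texture = source; // AddRef (atomic, thread-safe)
    threadB.texture = source; // AddRef
}
// When a RenderThreadState is destroyed, its ComPtr calls Release();
// the view is actually destroyed only after the last reference is gone.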

Difference between std::mutex lock function and std::lock_guard<std::mutex>?

Basically, the title is self-explanatory.
I use it in the following way:
The code is in Objective-C++.
Objective-C classes make concurrent calls to functions with different purposes.
I use a std::mutex to lock and unlock edits to a std::vector<T> across the entire class, since the C++ standard containers are not thread-safe.
Using lock_guard automatically unlocks the mutex again when it goes out of scope. That makes it impossible to forget to unlock it when returning, or when an exception is thrown. You should always prefer to use lock_guard or unique_lock instead of using mutex::lock(). See http://kayari.org/cxx/antipatterns.html#locking-mutex
lock_guard is an example of an RAII or SBRM (Scope-Bound Resource Management) type.
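A minimal sketch of the difference, using an illustrative mutex and vector (the names are made up for the example):

#include <mutex>
#include <vector>

std::mutex listMutex;          // illustrative names, not from the question
std::vector<int> values;

void addValueManually(int v)
{
    listMutex.lock();
    values.push_back(v);       // if this throws, the unlock() below never runs
    listMutex.unlock();
}

void addValueWithGuard(int v)
{
    std::lock_guard<std::mutex> guard(listMutex); // locks here
    values.push_back(v);
}                                                 // unlocks here, even on an exception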
std::lock_guard is used for only two purposes:
Automating the mutex unlock during destruction (no need to call .unlock()).
Allowing multiple mutexes to be locked together (via std::lock) to avoid deadlock.
For the latter use case you will need the std::adopt_lock tag:
std::lock(mutex_one, mutex_two);
std::lock_guard<std::mutex> lockPurposeOne(mutex_one, std::adopt_lock);
std::lock_guard<std::mutex> lockPurposeTwo(mutex_two, std::adopt_lock);
On the other hand, you will need to create yet another guard instance every time you need to lock the mutex, as std::lock_guard has no member functions to lock or unlock it. If you need a guard with unlocking functionality, take a look at the std::unique_lock class. You may also consider using std::shared_lock for parallel reading of your vector.
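For completeness, a sketch of the reader/writer split just mentioned, assuming a standard library with C++17 support (the names are again illustrative):

#include <cstddef>
#include <shared_mutex>
#include <vector>

std::shared_mutex vectorMutex;   // illustrative
std::vector<int> sharedValues;

int readValue(std::size_t index)
{
    std::shared_lock<std::shared_mutex> lock(vectorMutex); // many readers at once
    return sharedValues.at(index);
}

void appendValue(int v)
{
    std::unique_lock<std::shared_mutex> lock(vectorMutex); // exclusive writer
    sharedValues.push_back(v);
}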
You may notice that the std::shared_lock class is commented out in the header files and will only be accessible with C++17. According to the header file you can use std::shared_timed_mutex, but when you try to build the app it will fail, as Apple has updated the header files but not libc++ itself.
So for an Objective-C app it may be more convenient to use GCD: allocate a couple of queues for all your C++ containers and put semaphores where needed. Take a look at this excellent comparison.

Is FormatDateTime thread safe when using the same copy of TFormatSettings across multiple threads?

I've read a lot about thread safety when reading a variable simultaneously from multiple threads, but I am still not sure whether my case is fine or not.
Consider that I have:
const
MySettings: TFormatSettings =
(
CurrencyFormat : 0;
NegCurrFormat : 0;
ThousandSeparator: ' ';
DecimalSeparator : '.';
CurrencyString : '¤';
ShortDateFormat : 'MM/dd/yyyy';
LongDateFormat : 'dddd, dd MMMM yyyy';
//All fields of record are initialized.
);
Can I use FormatDateTime('dd/mm/yyyy hh:nn:ss', MySettings, Now) in multiple threads without worries or should I spawn a separate copy of MySettings for each thread?
This scenario is thread-safe if and only if the format settings record is not modified during the simultaneous calls to the formatting functions.
Indeed, the old-school formatting functions that used a shared global format settings record were thread-safe if and only if the shared object was not modified. This is the key point. Is the format settings object modified or not?
My take on all this is that you should avoid modifying format settings objects. Initialise them and then never modify them. That way you never have thread safety issues.
Yes, this is perfectly safe.
As long as MySettings is not changed, this is the way to use FormatDateTime and other similar procedures.
From documentation, System.SysUtils.TFormatSettings:
A variable of type TFormatSettings defines a thread-safe context that formatting functions can use in place of the default global context, which is not thread-safe.
N.B. You must provide this thread-safe context yourself in your code. It is thread-safe only if you ensure that the parameter, and anything it shares, is not changed during execution.
Typically my serialization libraries use a shared constant format settings variable, which provides stable reading and writing in all locales.

Thread safety for arrays in D?

Please bear with me on this as I'm new to this.
I have an array and two threads.
The first thread appends new elements to the array when required:
myArray ~= newArray;
The second thread removes elements from the array when required:
extractedArray = myArray[0..10];
myArray = myArray[10 .. myArray.length];
Is this thread safe?
What happens when the two threads interact on the array at the exact same time?
No, it is not thread-safe. If you share data across threads, then you need to deal with making it thread-safe yourself via facilities such as synchronized statements, synchronized functions, core.atomic, and mutexes.
However, the other major thing that needs to be pointed out is that all data in D is thread-local by default, so you can't access data across threads unless it's explicitly shared. That means you don't normally have to worry about thread safety at all; it's only when you explicitly share data that it becomes an issue.
This is not thread-safe.
It has the classic lost-update race:
Appending means examining the array to see whether it can expand in place; if it can't, it has to make an O(n) copy. While that copy is in progress, the other thread can slice off a piece, and when the copy finishes that piece will reappear (the removal is lost).
You should look into using a linked-list implementation, which is easier to make thread-safe.
Java's ConcurrentLinkedQueue uses the list described here for its implementation, and you can implement it with core.atomic.cas() from the standard library.
It is not thread-safe. The simplest way to fix this is to surround the array operations with a synchronized block. More about it here: http://dlang.org/statement.html#SynchronizedStatement

what is the "attribute" of a pthread mutex?

The function pthread_mutex_init allows you to specify a pointer to an attribute object. But I have yet to find a good explanation of what pthread attributes are. I have always just supplied NULL. Is there any use for this argument?
The documentation, in case you've forgotten it:
PTHREAD_MUTEX_INIT(3)    BSD Library Functions Manual    PTHREAD_MUTEX_INIT(3)

NAME
     pthread_mutex_init -- create a mutex

SYNOPSIS
     #include <pthread.h>

     int
     pthread_mutex_init(pthread_mutex_t *restrict mutex,
         const pthread_mutexattr_t *restrict attr);

DESCRIPTION
     The pthread_mutex_init() function creates a new mutex, with attributes
     specified with attr.  If attr is NULL, the default attributes are used.
The best place to find that information is from the POSIX standards pages.
A NULL mutex attribute gives you an implementation defined default attribute. If you want to know what you can do with attributes, check out the following reference and follow the pthread_mutexattr_* links in the SEE ALSO section. Usually, the default is a sensible set of attributes but it may vary between platforms, so I prefer to explicitly create mutexes with known attributes (better for portability).
This is for issue 7 of the standard, 1003.1-2008. The starting point for that is here. Clicking on Headers in the bottom left will allow you to navigate to the specific functionality (including pthread.h).
The attributes allow you to set or get:
the type (deadlocking, deadlock-detecting, recursive, etc).
the robustness (what happens when you acquire a mutex and the original owner died while possessing it).
the process-shared attribute (for sharing a mutex across process boundaries).
the protocol (how a thread behaves in terms of priority when a higher-priority thread wants the mutex).
the priority ceiling (the priority at which the critical section will run, a way of preventing priority inversion).
And, for completeness, there are the init and destroy calls as well, not tied to a specific attribute but used to create and clean up the attribute object itself.
All mutex attributes are set in a mutex attribute object by a function of the form:
int pthread_mutexattr_setname(pthread_mutexattr_t *attr, Type t);
All mutex attributes are retrieved from a mutex attribute object by a function of the form:
int pthread_mutexattr_getname(const pthread_mutexattr_t *attr, Type *t);
where name and Type are defined as in the table below:
Type and Name      Description and Value(s)

int protocol       Defines the scheduling protocol for mutex locks.
                   Values: PTHREAD_PRIO_NONE, PTHREAD_PRIO_PROTECT,
                   PTHREAD_PRIO_INHERIT

int pshared        Defines whether a mutex is shared with other processes.
                   Values: PTHREAD_PROCESS_SHARED, PTHREAD_PROCESS_PRIVATE

int prioceiling    Used for mutex priority ceiling values.
                   See POSIX.1 section 13.

int type           Defines the mutex locking behaviour.
                   Values: PTHREAD_MUTEX_NORMAL, PTHREAD_MUTEX_RECURSIVE,
                   PTHREAD_MUTEX_ERRORCHECK, PTHREAD_MUTEX_DEFAULT
If you scroll down the function listing for <pthread.h>, you will find a bunch of pthread_mutexattr_... functions, including init, destroy, and functions to set the various attributes of a mutex. When you pass NULL, the mutex is created with suitable defaults for all these attributes, but if you need to modify specific attributes, you can construct a pthread_mutexattr_t object and pass it in.
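For example, here is a minimal sketch of creating a mutex with an explicitly chosen type attribute rather than the default; the helper name is hypothetical.

#include <pthread.h>

// Hypothetical helper: create an error-checking mutex so that relocking it
// from the same thread returns EDEADLK instead of deadlocking silently.
int make_errorcheck_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr); // safe once the mutex is initialised
    return rc;
}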
Passing NULL for this argument means the default attributes are used.
If for some reason you want to change those default settings, you can create an attribute object with pthread_mutexattr_init and pass it in.
The documentation explains all you need about these mutex settings.
