UIManagedDocument documentState == 5, undefined state constant

While working with Core Data, I have found that my UIManagedDocument object has a documentState equal to 5. The UIDocument documentation only defines these constants:
enum {
    UIDocumentStateNormal          = 0,
    UIDocumentStateClosed          = 1 << 0,
    UIDocumentStateInConflict      = 1 << 1,
    UIDocumentStateSavingError     = 1 << 2,
    UIDocumentStateEditingDisabled = 1 << 3
};
typedef NSInteger UIDocumentState;
Those values would be 0, 1, 2, 4, and 8. A value of 5 might be a special state that UIManagedDocument uses, but I can't find it documented anywhere. The state seems to occur when the Core Data schema is changed, and I don't know what it means. I usually get the error: Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'This NSPersistentStoreCoordinator has no persistent stores. It cannot perform a save operation.', which makes sense, because the document needs to be open and in a normal state before it can be used as the persistent store.
Right now I am just checking for the state being equal to 5, and deleting and recreating the persistent store when this occurs. But once my app is live and storing user data, I won't want to be doing this. I haven't looked into best practices for migrating Core Data schemas, but checking for managedDocument.documentState == 5 in my code seems messy too. Is this document state documented anywhere?
Update: Now that I'm looking at it, it would make sense that these constants are defined the way they are so that they can be bitwise ORed together as masks. A documentState equal to 5 would then mean the document is both UIDocumentStateClosed and UIDocumentStateSavingError. These errors are pretty general, though. How would I go about narrowing down the root cause?
Also, all the example code I've seen for checking these document states tests for equality, i.e. if (managedDocument.documentState == UIDocumentStateClosed), but this implies that is not correct and that the state should instead be tested with a bitwise AND, i.e. if (managedDocument.documentState & UIDocumentStateClosed).

I do not have much experience with UIDocument, but it seems to me that the document state is a bit mask combining several states, so documentState == 5 == 1 + 4 means UIDocumentStateClosed + UIDocumentStateSavingError.
In the Document-Based App Programming Guide for iOS you can see that documentState is never checked with ==, but always tested against bit masks, for example:
- (void)documentStateChanged {
    UIDocumentState state = _document.documentState;
    [_statusView setDocumentState:state];
    if (state & UIDocumentStateEditingDisabled) {
        [_textView resignFirstResponder];
    }
    if (state & UIDocumentStateInConflict) {
        [self showConflictButton];
    } else {
        [self hideConflictButton];
        [self dismissModalViewControllerAnimated:YES];
    }
}

Related

Is CGAL 2D Regularized Boolean Set-Operations lib thread safe?

I am currently using the library mentioned in the title; see CGAL 2D-reg-bool-set-op-pol.
The library provides types for polygons and polygon sets, which are internally represented as so-called arrangements.
My question is: to what extent is this library thread-safe, that is, fit for parallel computation on its objects?
There are several levels at which thread safety could be guaranteed:
1) If I take an object from the library, like an arrangement
Polygon_set_2 S;
I might be able to execute
Polygon_2 P;
S.join(P);
and
Polygon_2 Q;
S.join(Q);
in two different concurrent execution units/threads in parallel without harm and get the right result, as if I had done everything sequentially. That would be the highest degree of thread safety/possible parallelism.
2) In fact, a much lesser degree would be enough for me. In that case S and P would be members of a class C, so that two class instances have different S and P instances. Then I would like to compute, say, S.join(P) in parallel for a list of instances of class C, e.g. by calling a suitable member function of C with std::async.
Just to be complete, I insert here a bit of actual code from my project, which gives more flesh to these terse descriptions.
// the following typedefs are more or less standard from the
// CGAL library examples.
typedef CGAL::Exact_predicates_exact_constructions_kernel Kernel;
typedef Kernel::Point_2 Point_2;
typedef Kernel::Circle_2 Circle_2;
typedef Kernel::Line_2 Line_2;
typedef CGAL::Gps_circle_segment_traits_2<Kernel> Traits_2;
typedef CGAL::General_polygon_set_2<Traits_2> Polygon_set_2;
typedef Traits_2::General_polygon_2 Polygon_2;
typedef Traits_2::General_polygon_with_holes_2 Polygon_with_holes_2;
typedef Traits_2::Curve_2 Curve_2;
typedef Traits_2::X_monotone_curve_2 X_monotone_curve_2;
typedef Traits_2::Point_2 Point_2t;
typedef Traits_2::CoordNT coordnt;
typedef CGAL::Arrangement_2<Traits_2> Arrangement_2;
typedef Arrangement_2::Face_handle Face_handle;
// the following type is not copied from the CGAL library example code but
// introduced by me
typedef std::vector<Polygon_with_holes_2> pwh_vec_t;
// the following is an excerpt of my full GerberLayer class that retains
// only the data members used in the join() member function. This data is
// therefore local to the class instance.
class GerberLayer
{
public:
GerberLayer();
~GerberLayer();
void join();
pwh_vec_t raw_poly_lis;
pwh_vec_t joined_poly_lis;
Polygon_set_2 Saux;
annotate_vec_t annotate_lis;
polar_vec_t polar_lis;
};
//
// It is not necessary to understand the workings of the function;
// I deleted all debug and timing output etc. It is just to "showcase" some
// typical operations from the CGAL regularized boolean set-operations
// library for polygons by Efi Fogel et al.
//
void GerberLayer::join()
{
    Saux.clear();
    auto it_annbase = annotate_lis.begin();
    annotate_vec_t::iterator itann = annotate_lis.begin();
    bool first_block = true;
    while (itann != annotate_lis.end()) {
        gpolarity akt_polar = itann->polar;
        auto itnext = std::find_if(itann, annotate_lis.end(),
                                   [=](auto a) { return a.polar != akt_polar; });
        if (first_block) {
            if (akt_polar == Dark) {
                Saux.join(raw_poly_lis.begin() + (itann - it_annbase),
                          raw_poly_lis.begin() + (itnext - it_annbase));
            }
            first_block = false;
        } else {
            if (akt_polar == Dark) {
                Saux.join(raw_poly_lis.begin() + (itann - it_annbase),
                          raw_poly_lis.begin() + (itnext - it_annbase));
            } else {
                Polygon_set_2 Saux1;
                Saux1.join(raw_poly_lis.begin() + (itann - it_annbase),
                           raw_poly_lis.begin() + (itnext - it_annbase));
                Saux.complement();
                pwh_vec_t auxlis;
                Saux1.polygons_with_holes(std::back_inserter(auxlis));
                Saux.join(auxlis.begin(), auxlis.end());
                Saux.complement();
            }
        }
        itann = itnext;
    }
    joined_poly_lis.clear();
    annotate_lis.clear();
    Saux.polygons_with_holes(std::back_inserter(joined_poly_lis));
}
int join_wrapper(GerberLayer* p_layer)
{
    p_layer->join();
    return 0;
}
// here the parallelism (of the "embarrassingly parallel" kind) occurs:
// for every GerberLayer a dedicated task is started, which calls
// the above GerberLayer::join() function
void Window::do_unify()
{
    std::vector<std::future<int>> fivec;
    for (int i = 0; i < gerber_layer_manager.num_layers(); ++i) {
        GerberLayer* p_layer = gerber_layer_manager.at(i);
        fivec.push_back(std::async(join_wrapper, p_layer));
    }
    int sz = wait_for_all(fivec); // written by me, not shown
}
One might think that 2) must be trivially possible, since only "different" instances of polygons and arrangements are in play. But it is imaginable, since the library works with arbitrary-precision points (Point_2t in my code above), that for some implementation reason or other all points are inserted into a list static to the class Point_2t, so that identical points are represented only once in this list. Then there would be no such thing as "independent instances of Point_2t", and consequently none for Polygon_2 or Polygon_set_2 either, and one could say farewell to thread safety.
I tried to resolve this question by googling (not by analyzing the library code, I have to admit) and would hope for an authoritative answer (hopefully a positive one, as this primitive parallelism would greatly speed up my code).
Addendum:
1) I implemented this already and made a test run, with nothing exceptional occurring and visually plausible results; but of course this proves nothing.
2) The same question applies to the CGAL 2D Arrangement package from the same authors.
Thanks in advance!
P.S.: I am using CGAL 4.7 from the packages supplied with Ubuntu 16.04 (Xenial). A newer version on Ubuntu 18.04 gave me errors, so I decided to stay with 4.7. Should a version newer than 4.7 be thread-safe but not 4.7, I will of course try to use that newer version.
Incidentally, I could not find out whether the libcgal***.so libraries supplied by Ubuntu 16.04 are thread-safe as described in the documentation. In particular, I found no reference to the macro CGAL_HAS_THREADS, which is mentioned in the "thread safety" part of the docs, when I looked through the build logs of the Xenial cgal package on Launchpad.
Indeed there are several levels of thread safety.
The 2D Regularized Boolean Operations package depends on the 2D Arrangement package, and both packages depend on a kernel. For most operations the EPEC kernel is required.
Both packages are thread-safe, except for the rational-arc traits (Arr_rational_function_traits_2).
However, the EPEC kernel is not yet thread-safe when number-type objects are shared among threads. So if you, for example, construct different arrangements in different threads, each from its own input set of curves, you are safe.
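As an illustration of that rule, here is a minimal, self-contained sketch (my own construction, using plain linear polygons rather than the circle-segment traits of the question): every CGAL object, including its exact coordinates, is created inside the thread that uses it, so no exact number-type object is shared between threads.
#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <CGAL/Polygon_2.h>
#include <CGAL/Polygon_set_2.h>
#include <future>
#include <vector>

typedef CGAL::Exact_predicates_exact_constructions_kernel K;
typedef CGAL::Polygon_2<K> Poly;
typedef CGAL::Polygon_set_2<K> PolySet;

// Each task receives only plain doubles and builds its CGAL objects locally,
// so no exact number-type object ever crosses a thread boundary.
static std::size_t join_squares(double offset, int n)
{
    PolySet s;
    for (int i = 0; i < n; ++i) {
        double x = offset + i * 0.5;          // overlapping unit squares
        Poly p;                                // constructed thread-locally
        p.push_back(K::Point_2(x, 0.0));       // counterclockwise orientation,
        p.push_back(K::Point_2(x + 1.0, 0.0)); // as join() requires
        p.push_back(K::Point_2(x + 1.0, 1.0));
        p.push_back(K::Point_2(x, 1.0));
        s.join(p);
    }
    return s.number_of_polygons_with_holes();
}

int main()
{
    std::vector<std::future<std::size_t>> futs;
    for (int t = 0; t < 4; ++t)                // one independent task per thread
        futs.push_back(std::async(std::launch::async, join_squares, t * 10.0, 8));
    for (auto& f : futs)
        f.get();                               // collect the results
}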

How to satisfy C6011 error with dynamically allocated array of structures

We have a heap-allocated array of custom structures that is pointed to by a local pointer. The pointer to the array is checked for nullptr. However, inside my loop, VC++ flags the first use of an indexed entry in the array with "Dereferencing NULL pointer 'ppi'".
I'm having a dumb moment here, I think, but there doesn't seem to be any way to satisfy the C6011 warning... how do I correct this scenario?
I have included some snippets of code to briefly illustrate the code in question.
// Previously, SystemInfoObject.PeripheralPortInfo is heap-alloc'd to contain
// multiple PeripheralInfo structures, and
// SystemInfoObject.PeripheralPortInfoCount is adjusted to the number
// of elements.
PeripheralInfo *ppi = nullptr;
ppi = SystemInfoObject.PeripheralPortInfo; // Set our local pointer
if (ppi != nullptr)
{
    for (int i = 0; i < SystemInfoObject.PeripheralPortInfoCount; i++)
    {
        if (_tcsncmp(ppi[i].PortName, _T("\\\\"), 2) == 0) // C6011
        {
            // Some code
        }
    }
}
Visual Studio strikes again: I didn't notice that code later in the loop changes the pointer ppi, so the null condition was never re-checked on subsequent loop iterations.
Wish I could delete my question! All set!
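For reference, one pattern that keeps the analyzer satisfied (a sketch under the same assumed declarations as the snippet above): make the local pointer const, so nothing inside the loop can reassign it, and the single null check up front provably covers every iteration.
// Sketch: same assumed types as above. A const pointer cannot be redirected
// inside the loop, so the one null check continues to hold throughout.
PeripheralInfo * const ppi = SystemInfoObject.PeripheralPortInfo;
if (ppi != nullptr)
{
    for (int i = 0; i < SystemInfoObject.PeripheralPortInfoCount; i++)
    {
        if (_tcsncmp(ppi[i].PortName, _T("\\\\"), 2) == 0)
        {
            // Some code
        }
    }
}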

CUDD using not-gate

I am trying to build a BDD for monotone multiplication and need to use the negation of the input bits.
I am using the following code:
DdNode *x[N], *y[N], *nx[N], *ny[N];
gbm = Cudd_Init(0, 0, CUDD_UNIQUE_SLOTS, CUDD_CACHE_SLOTS, 0); /* Initialize a new BDD manager. */
for (k = 0; k < N; k++)
{
    x[k] = Cudd_bddNewVar(gbm);
    nx[k] = Cudd_Not(x[k]);
    y[k] = Cudd_bddNewVar(gbm);
    ny[k] = Cudd_Not(y[k]);
}
The error that I am getting is:
cuddGarbageCollect: problem in table 0
dead count != deleted
This problem is often due to a missing call to Cudd_Ref
or to an extra call to Cudd_RecursiveDeref.
See the CUDD Programmer's Guide for additional details.
Aborted (core dumped)
The multiplier compiles and runs fine when I use
x[k] = Cudd_bddNewVar(gbm);
nx[k] = Cudd_bddNewVar(gbm);
y[k] = Cudd_bddNewVar(gbm);
ny[k] = Cudd_bddNewVar(gbm);
What should I do? The manual does not help, and neither did trying to ref x[k], nx[k], ...
Every BDD node that is not referenced is subject to deletion by any CUDD operation. If you want to make sure that all the nodes stored in your arrays remain valid, you need to Cudd_Ref them immediately after they are returned by CUDD. Hence, you need to correct your code to:
for (k = 0; k < N; k++)
{
    x[k] = Cudd_bddNewVar(gbm);
    Cudd_Ref(x[k]);
    nx[k] = Cudd_Not(x[k]);
    Cudd_Ref(nx[k]);
    y[k] = Cudd_bddNewVar(gbm);
    Cudd_Ref(y[k]);
    ny[k] = Cudd_Not(y[k]);
    Cudd_Ref(ny[k]);
}
Before deallocating the Cudd manager, you then need to dereference the nodes:
for (k = 0; k < N; k++)
{
    Cudd_RecursiveDeref(gbm, x[k]);
    Cudd_RecursiveDeref(gbm, nx[k]);
    Cudd_RecursiveDeref(gbm, y[k]);
    Cudd_RecursiveDeref(gbm, ny[k]);
}
Note that the fact that your code works when allocating more variables does not show that referencing is not needed. It may simply be that you never use enough nodes for the garbage collector to trigger, and before that, the problem is not detected.
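Finally, a common way to verify the bookkeeping before shutting down (a sketch; Cudd_CheckZeroRef and Cudd_Quit are standard CUDD calls, the assert is my addition):
#include <assert.h>
int leaked = Cudd_CheckZeroRef(gbm); /* nodes still referenced at shutdown */
assert(leaked == 0);                 /* every Cudd_Ref matched by a deref  */
Cudd_Quit(gbm);                      /* release the manager                */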

lock-free bounded MPMC ringbuffer failure

I've been banging my head against my attempt at a lock-free multiple-producer multiple-consumer ring buffer. The basic idea is to use the innate overflow of the unsigned char and unsigned short types, fix the element buffer to the range of either of those types, and then you get a free wrap back to the beginning of the ring buffer.
The problem is that my solution doesn't work for multiple producers (it does, though, work for N consumers, and also for single producer single consumer).
#include <atomic>

template<typename Element, typename Index = unsigned char> struct RingBuffer
{
    std::atomic<Index> readIndex;
    std::atomic<Index> writeIndex;
    std::atomic<Index> scratchIndex;
    Element elements[1 << (sizeof(Index) * 8)];

    RingBuffer() :
        readIndex(0),
        writeIndex(0),
        scratchIndex(0)
    {
    }

    bool push(const Element & element)
    {
        while(true)
        {
            const Index currentReadIndex = readIndex.load();
            Index currentWriteIndex = writeIndex.load();
            const Index nextWriteIndex = currentWriteIndex + 1;
            if(nextWriteIndex == currentReadIndex)
            {
                return false;
            }
            if(scratchIndex.compare_exchange_strong(
                currentWriteIndex, nextWriteIndex))
            {
                elements[currentWriteIndex] = element;
                writeIndex = nextWriteIndex;
                return true;
            }
        }
    }

    bool pop(Element & element)
    {
        Index currentReadIndex = readIndex.load();
        while(true)
        {
            const Index currentWriteIndex = writeIndex.load();
            const Index nextReadIndex = currentReadIndex + 1;
            if(currentReadIndex == currentWriteIndex)
            {
                return false;
            }
            element = elements[currentReadIndex];
            if(readIndex.compare_exchange_strong(
                currentReadIndex, nextReadIndex))
            {
                return true;
            }
        }
    }
};
The main idea for the writing side was to use a temporary index, scratchIndex, that acts as a pseudo-lock allowing only one producer at any one time to copy-construct into the elements buffer before updating writeIndex and letting other producers make progress. Before I am called a heathen for calling my approach "lock-free": I realise this approach isn't exactly lock-free, but in practice (if it worked!) it would be significantly faster than a normal mutex!
I am aware of a (more complex) MPMC ringbuffer solution here http://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue, but I am really experimenting with my idea to then compare against that approach and find out where each excels (or indeed whether my approach just flat out fails!).
Things I have tried:
Using compare_exchange_weak
Using more precise std::memory_orders that match the behaviour I want
Adding cacheline pads between the various indices I have
Making elements an array of std::atomic instead of a plain Element array
I am sure that this boils down to a fundamental segfault in my head as to how to use atomic accesses to get around using mutexes, and I would be entirely grateful to whoever can point out which neurons are drastically misfiring in my head! :)
This is a form of the A-B-A problem. A successful producer looks something like this:
1. load currentReadIndex
2. load currentWriteIndex
3. cmpxchg store scratchIndex = nextWriteIndex
4. store element
5. store writeIndex = nextWriteIndex
If a producer stalls for some reason between steps 2 and 3 for long enough, it is possible for the other producers to produce an entire queue's worth of data and wrap back around to the exact same index so that the compare-exchange in step 3 succeeds (because scratchIndex happens to be equal to currentWriteIndex again).
By itself, that isn't a problem. The stalled producer is perfectly within its rights to increment scratchIndex to lock the queue—even if a magical ABA-detecting cmpxchg rejected the store, the producer would simply try again, reload exactly the same currentWriteIndex, and proceed normally.
The actual problem is the nextWriteIndex == currentReadIndex check between steps 2 and 3. The queue is logically empty if currentReadIndex == currentWriteIndex, so this check exists to make sure that no producer gets so far ahead that it overwrites elements that no consumer has popped yet. It appears to be safe to do this check once at the top, because all the consumers should be "trapped" between the observed currentReadIndex and the observed currentWriteIndex.
Except that another producer can come along and bump up the writeIndex, which frees the consumer from its trap. If a producer stalls between steps 2 and 3, when it wakes up the stored value of readIndex could be absolutely anything.
Here's an example, starting with an empty queue, that shows the problem happening:
Producer A runs steps 1 and 2. Both loaded indices are 0. The queue is empty.
Producer B interrupts and produces an element.
Consumer pops an element. Both indices are 1.
Producer B produces 255 more elements. The write index wraps around to 0, the read index is still 1.
Producer A awakens from its slumber. It had previously loaded both read and write indices as 0 (empty queue!), so it attempts step 3. Because the other producer coincidentally paused on index 0, the compare-exchange succeeds, and the store proceeds. At completion the producer sets writeIndex = 1, so now both stored indices are 1, and the queue is logically empty. A full queue's worth of elements will now be completely ignored.
(I should mention that the only reason I can get away with talking about "stalling" and "waking up" is that all the atomics used are sequentially consistent, so I can pretend that we're in a single-threaded environment.)
Note that the way you are using scratchIndex to guard concurrent writes is essentially a lock; whoever successfully completes the cmpxchg gets total write access to the queue until it releases the lock. The simplest way to fix this failure is to just replace scratchIndex with a spinlock; it won't suffer from A-B-A, and it's what's actually happening anyway.
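Here is a minimal sketch of that fix, under the same RingBuffer members as above but with scratchIndex replaced by a std::atomic_flag spin lock (the flag member is my assumption, not part of the original code). Because the lock does not depend on the index value, the A-B-A window disappears:
std::atomic_flag writeLock = ATOMIC_FLAG_INIT; // replaces scratchIndex

bool push(const Element & element)
{
    // Spin until we own the write side; only one producer proceeds at a time.
    while(writeLock.test_and_set(std::memory_order_acquire))
    {
    }
    const Index currentReadIndex = readIndex.load();
    const Index currentWriteIndex = writeIndex.load();
    const Index nextWriteIndex = currentWriteIndex + 1;
    if(nextWriteIndex == currentReadIndex)
    {
        writeLock.clear(std::memory_order_release); // queue full
        return false;
    }
    elements[currentWriteIndex] = element;
    writeIndex = nextWriteIndex; // publish only after the element is written
    writeLock.clear(std::memory_order_release);
    return true;
}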
bool push(const Element & element)
{
    while(true)
    {
        const Index currentReadIndex = readIndex.load();
        Index currentWriteIndex = writeIndex.load();
        const Index nextWriteIndex = currentWriteIndex + 1;
        if(nextWriteIndex == currentReadIndex)
        {
            return false;
        }
        if(scratchIndex.compare_exchange_strong(
            currentWriteIndex, nextWriteIndex))
        {
            elements[currentWriteIndex] = element;
            // Problem here!
            writeIndex = nextWriteIndex;
            return true;
        }
    }
}
I've marked the problematic spot. Multiple threads can get to the writeIndex = nextWriteIndex at the same time. The data will be written in any order, although each write will be atomic.
This is a problem because you're trying to update two values using the same atomic condition, which is generally not possible. Assuming the rest of your method is fine, one way around this would be to combine scratchIndex and writeIndex into a single value of double the size. For example, treat two uint32_t values as a single uint64_t value and operate atomically on that.
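As a rough sketch of that idea (my own illustration, assuming fixed 32-bit indices rather than the Index template parameter): pack both indices into one std::atomic<uint64_t>, so that a single compare-exchange observes and updates them together.
#include <atomic>
#include <cstdint>

// Both halves live in one word, so they can never be observed out of sync.
static std::uint64_t pack(std::uint32_t scratch, std::uint32_t write)
{
    return (static_cast<std::uint64_t>(scratch) << 32) | write;
}
static std::uint32_t scratchPart(std::uint64_t v) { return static_cast<std::uint32_t>(v >> 32); } // shown for symmetry
static std::uint32_t writePart(std::uint64_t v) { return static_cast<std::uint32_t>(v); }

std::atomic<std::uint64_t> indices{0}; // scratchIndex and writeIndex combined

// Succeeds only if neither half has changed since 'expected' was loaded.
bool tryAdvanceScratch(std::uint64_t expected, std::uint32_t nextScratch)
{
    const std::uint64_t desired = pack(nextScratch, writePart(expected));
    return indices.compare_exchange_strong(expected, desired);
}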

Atomic compare exchange. Is it correct?

I want to atomically add 1 to a counter under certain conditions, but I'm not sure if the following is correct in a threaded environment:
void UpdateCounterAndLastSessionIfMoreThan60Seconds() const {
    auto currentTime = timeProvider->GetCurrentTime();
    auto currentLastSession = lastSession.load();
    bool shouldIncrement = (currentTime - currentLastSession >= 1 * 60);
    if (shouldIncrement) {
        auto isUpdate = lastSession.compare_exchange_strong(currentLastSession, currentTime);
        if (isUpdate)
            changes.fetch_add(1);
    }
}
private:
    std::shared_ptr<Time> timeProvider;
    mutable std::atomic<time_t> lastSession;
    mutable std::atomic<uint32_t> changes;
I don't want changes incremented more than once if two threads simultaneously evaluate shouldIncrement = true and isUpdate = true (only one should increment changes in that case).
I'm no C++ expert, but it looks to me like you've got a race condition between the evaluation of "isUpdate" and the call to "fetch_add(1)".
So I think the answer to your question "Is this thread safe?" is "No, it is not".
It is at least a bit iffy, as the following scenario shows:
First thread 1 does these:
auto currentTime = timeProvider->GetCurrentTime();
auto currentLastSession = lastSession.load();
bool shouldIncrement = (currentTime - currentLastSession >= 1 * 60);
Then thread 2 does the same 3 statements, but with a currentTime greater than thread 1's.
Then thread 1 continues to update lastSession with its time, which is less than thread 2's time.
Then thread 2 gets its turn, but fails to update lastSession, because thread 1 changed the value already.
So the end result is that lastSession is outdated, because thread 2 failed to update it to the latest value. This might not matter in all cases, and the situation might be fixed very soon after, but it's one ugly corner that might break some assumptions somewhere, if not with the current code then after some later changes.
Another thing to note is that lastSession and changes are not atomically in sync. Other threads can occasionally see a changed lastSession with the changes counter not yet incremented for that change. Again, this might not matter, but it's easy to forget something like this and accidentally write code that assumes they are in sync.
I'm not immediately sure whether you can make this 100% safe with just atomics. Wrap it in a mutex instead.
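A minimal sketch of the mutex version, assuming the same members as in the question plus a mutable std::mutex member named mtx (my addition): the whole check-and-update becomes one critical section, so lastSession and changes can no longer be observed out of sync.
#include <mutex>

void UpdateCounterAndLastSessionIfMoreThan60Seconds() const {
    std::lock_guard<std::mutex> lock(mtx); // mtx: assumed 'mutable std::mutex' member
    auto currentTime = timeProvider->GetCurrentTime();
    if (currentTime - lastSession >= 1 * 60) {
        lastSession = currentTime; // under the lock, plain non-atomic members would also suffice
        changes.fetch_add(1);
    }
}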
