SwiftUI + Core Data crash when using `.allObjects` of a "To Many" relationship entry - core-data

I am constantly getting crash reports with a stack trace like this:
0 libobjc.A.dylib 0x0000000199e6a334 object_getMethodImplementation + 48 (objc-object.h:97)
1 CoreFoundation 0x00000001853d35a4 _NSIsNSSet + 40 (NSObject.m:381)
2 CoreFoundation 0x00000001852a6888 -[NSMutableSet unionSet:] + 108 (NSSet_Internal.h:56)
3 CoreData 0x000000018b4af3b0 -[_NSFaultingMutableSet willReadWithContents:] + 636 (_NSFaultingMutableSet.m:167)
4 CoreData 0x000000018b53c3a0 -[_NSFaultingMutableSet allObjects] + 32 (_NSFaultingMutableSet.m:340)
My code does the following; it lives in an extension I have written on the Core Data auto-generated class:
if let tasks = tasks?.allObjects as? [Task] {
}
where tasks is the auto-generated "to many" relationship property from Core Data, declared as @NSManaged public var tasks: NSSet?.
Any ideas what is wrong here? Maybe it is an issue in Core Data itself, or in the SwiftUI + Core Data combination for some reason.

This is the way I do it, as a computed property in the class extension, and I have never had issues with it:
var allTasks: [Task] {
    tasks?.allObjects as? [Task] ?? []
}
You could move the if let inside, do any sorting or filtering there, and return the result, as in the sketch below.
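A minimal sketch of that, assuming a hypothetical optional name attribute on Task to sort by (swap in whatever attribute and ordering you actually need):
var sortedTasks: [Task] {
    // fire the fault once via allObjects, then sort a plain Swift array copy
    guard let tasks = tasks?.allObjects as? [Task] else { return [] }
    return tasks.sorted { ($0.name ?? "") < ($1.name ?? "") }
}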

With the latest SwiftUI 3 I have stopped seeing this crash.
This is exactly what I initially suspected the source of the problem to be (an internal SwiftUI issue), and it is proving to be just that.

Related

Understanding the map eviction algorithm in Hazelcast

I'm using Hazelcast IMDG 3.12.6 (yes, it's too old, I know) in my current projects, and I'm trying to figure out how exactly the map eviction algorithm works.
First of all, I read two things:
https://docs.hazelcast.org/docs/3.12.6/manual/html-single/index.html#map-eviction
https://docs.hazelcast.org/docs/3.12.6/manual/html-single/index.html#eviction-algorithm
I found something confusing about map eviction, so let me describe it.
There is a class com.hazelcast.map.impl.eviction.EvictionChecker containing the method public boolean checkEvictable. This method checks whether a recordStore is evictable based on the max size policy:
switch (maxSizePolicy) {
case PER_NODE:
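// toPerPartitionMaxSize translates the configured node-wide limit into a per-partition threshold (roughly maxConfiguredSize * memberCount / partitionCount), since each RecordStore holds a single partition's entries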
return recordStore.size() > toPerPartitionMaxSize(maxConfiguredSize, mapName);
case PER_PARTITION:
return recordStore.size() > maxConfiguredSize;
//other cases...
I find it confusing that the PER_NODE policy checks toPerPartitionMaxSize while the PER_PARTITION policy checks maxConfiguredSize.
It seems to me it should be the other way around.
If we look into the history of how EvictionChecker has changed, we find something interesting.
Looking at git blame, this class has been changed twice: 7 years ago and 4 years ago.
I believe the PER_NODE and PER_PARTITION conditions should be swapped.
Can you please explain and confirm whether com.hazelcast.map.impl.eviction.EvictionChecker#checkEvictable behaves correctly when PER_NODE is used?
UPD
I've done some local tests. My configuration is:
<map name="ReportsPerNode">
<eviction-policy>LRU</eviction-policy>
<max-size policy="PER_NODE">500</max-size>
</map>
I tried to put 151 elements into the map. As a result, the map contains 112 elements and 39 elements have been evicted by the max size policy. The calculation gives translatedPartitionSize = maxConfiguredSize * memberCount / partitionCount = 500 * 1 / 271 = 1.84.
If the PER_PARTITION policy is used, my test completes fine.
I analyzed how the data is distributed over the RecordStores: every RecordStore contains between 1 and 4 elements.
According to the formula maxConfiguredSize * memberCount / partitionCount, that means maxConfiguredSize should be 1084 elements: 1084 * 1 / 271 = 4.
As a result I have two configurations:
PER_NODE works well when max-size = 1084. The map contains 151 elements as expected.
PER_PARTITION works well when max-size = 500; it also works well when max-size = 271. The map contains 151 elements as expected.
It seems that the data distribution over the RecordStores strongly depends on the key hash. Why can't we put one element per partition, if there are 271 partitions by default?
It's also unclear why I should need a map capacity of 1084 to store only 151 elements.

Why does my code not update the number on the screen

The function divides the numerator by the denominator and is supposed to update the app's text view accordingly every second. The problem is that it doesn't update the screen; it just shows the original value of the numerator, which is 60.
What do I change in order to make this work?
fun division() {
val numerator = 60
var denominator = 4
repeat(4) {
Thread.sleep(1_000)
findViewById<TextView>(R.id.division_textview).setText("${numerator / denominator}")
denominator--
}
}
Because you are setting (basically overwriting) the text every time the loop runs, you will only see the value from the last iteration, which is 60 / 1, and that's why you only see 60. Try it like this:
fun division() {
val numerator = 60
var denominator = 4
repeat(4) {
Thread.sleep(1_000)
findViewById<TextView>(R.id.division_textview).append("${numerator / denominator}\n")
denominator--
}
}
setText() was overwriting the text with the new value, but append() keeps the previous text.
This is that dang Codelab again, isn't it? I knew it looked familiar... I already answered a similar question here, but basically: when you run division on the main thread (which you must be doing, since you're messing with UI components), you're freezing the app because you're blocking the thread with Thread.sleep.
The display can't actually update until your code has finished running, i.e. after you exit the division function, because it's all running on the same thread, and the display update pass comes later. So this is what's actually happening:
freeze the app for 1 second
set the text as the result of 60 / 4 - it won't actually redraw until later, after your code has finished, so there's no visual change
freeze the app for 1 second
set the text as the result of 60 / 3 - again you won't see anything happen yet, but now it's going to show 60 / 3 instead of 60 / 4, because you just updated the state of that TextView
etc.
The last text you set is the result of 60 / 1, and then your code finishes, so the system can finally get around to updating the display. So the first thing you see after the app stops freezing is 60 - it's not just the numerator, it's the last calculation from your loop.
If you want something to update while the app is running, there are lots of solutions, like coroutines, CountDownTimers, posting runnables that execute at a specific time, etc. The answer I linked shows how to create a separate thread to run basically the same code on, so you can block it as much as you like without affecting the running of the app. The one thing you don't do is block the main thread like that Codelab example does. It's a bad Codelab.
You can use delay and then call the function from a coroutine:
private suspend fun division() {
val numerator = 60
var denominator = 4
repeat(4) {
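// delay() suspends the coroutine without blocking the main thread, so the UI gets a chance to redraw between iterations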
delay(1000)
findViewById<TextView>(R.id.division_textview).text = "${numerator / denominator}"
denominator--
}
}
Then from your Activity/Fragment:
lifecycleScope.launch {
division()
}

My segmented picker has normal Int values as tags. How is this passed to and from Core Data?

My SwiftUI segmented control picker uses plain Int values (".tag(1)" etc.) for its selection.
Core Data only offers Int16, Int32 and Int64 options to choose from, and with any of those it seems my picker selection and Core Data refuse to talk to each other.
How is this (??simple??) task achieved, please?
I've tried every numeric option within Core Data, including Int16-64, doubles and floats; all of them either break my code or simply don't work.
Picker(selection: $addDogVM.gender, label: Text("Gender?")) {
Text("Boy ♂").tag(1)
Text("?").tag(2)
Text("Girl ♀").tag(3)
}
I expected any of the three Core Data Int options to work out of the box and to be compatible with the (standard) Int used by the picker.
Each element of a segmented control is represented by an index of type Int, and this index therefore commences at 0.
So using your example of a segmented control with three segments (for example: Boy ♂, ?, Girl ♀), the segments are represented by the three indexes 0, 1 and 2.
If the user selects the segmented control that represents Girl ♀, then...
segmentedControl.selectedSegmentIndex = 2
When storing a value with the Core Data framework that is to be represented as a segmented control index in the UI, I therefore always commence at 0.
Everything you read from this point onwards is programmer preference; that is, to be clear, there are a number of ways to achieve the same outcome and you should choose the one that best suits you and your coding style. Note also that this can be confusing for a newcomer, so I would encourage patience. My only advice: keep things as simple as possible until you've tested and debugged enough to understand the differences.
So to continue:
The Apple Documentation states that...
...on 64-bit platforms, Int is the same size as Int64.
So in the Core Data model editor (.xcdatamodeld file), I choose to apply an Integer 64 attribute type for any value that will be used as an Int in my code.
Also, somewhere, some time ago, I read that if there is no reason to use Integer 16 or Integer 32, then you should default to Integer 64 in the object model graph. (I assume Integer 16 and Integer 32 are kept for backward compatibility.) If I find that reference I'll link it here.
I could write here about the use of scalar attribute types and manually writing your managed object subclass/es by selecting Class Codegen = Manual/None in the attribute inspector, but honestly I have decided such added detail would only complicate matters.
So your "automatically generated by Core Data" managed object subclass/es (NSManagedObject) will use the optional NSNumber? wrapper...
You will therefore need to convert your persisted/saved data in your code.
I do this in two places... when I access the data and when I persist the data.
(Noting I assume your entity is of type Dog and an instance exists of dog i.e. let dog = Dog())
// access
tempGender = dog.gender as? Int
// save
dog.gender = tempGender as NSNumber?
In between, I use a "temp" var property of type Int to work with the segmented control.
// temporary property to use with segmented control
private var tempGender: Int?
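For illustration, here is a minimal sketch of how that temp property could sit in an ObservableObject view model (matching the addDogVM in your question; the exact class shape and the save method are my assumption, not from your code):
import CoreData
import Combine

final class AddDogViewModel: ObservableObject {
    // temporary property the Picker binds to
    @Published var gender: Int = 0

    private let dog: Dog

    init(dog: Dog) {
        self.dog = dog
        // access: NSNumber? -> Int
        gender = (dog.gender as? Int) ?? 0
    }

    func save() {
        // save: Int -> NSNumber
        dog.gender = NSNumber(value: gender)
        try? dog.managedObjectContext?.save()
    }
}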
UPDATE
I do the last part a little differently now...
Rather than convert the data in code, I made a simple extension to my managed object subclass to execute the conversion. So rather than accessing the Core Data attribute directly and manipulating the data in code, now I instead use this convenience var.
extension Dog {
var genderAsInt: Int {
get {
guard let gender = self.gender else { return 0 }
return Int(truncating: gender)
}
set {
self.gender = NSNumber(value: newValue)
}
}
}
Your picker code...
Picker(selection: $addDogVM.genderAsInt, label: Text("Gender?")) {
Text("Boy ♂").tag(0)
Text("?").tag(1)
Text("Girl ♀").tag(2)
}
Any questions, ask in the comments.

Azure StorageException when using emulated storage (within documented constraints)

Our application performs several batches of TableBatchOperation. We ensure that each of these table batch operations has
100 or fewer table operations
table operations for one entity partition key only
Along the lines of the following:
foreach (var batch in batches)
{
var operation = new TableBatchOperation();
operation.AddRange(batch.Select(x => TableOperation.InsertOrReplace(x)));
await table.ExecuteBatchAsync(operation);
}
When we use emulated storage we're hitting a Microsoft.WindowsAzure.Storage.StorageException: "Element 99 in the batch returned an unexpected response code."
When we use production Azure, everything works fine.
Emulated storage is configured as follows:
<add key="StorageConnectionString" value="UseDevelopmentStorage=true;" />
I'm concerned that although everything is working OK in production (where we use real Azure), the fact that it's blowing up with emulated storage may be symptomatic of us doing something we shouldn't be.
I've run it with a debugger (before it blows up) and verified that (as per API):
The entire operation is only 492093 characters when serialized to JSON (984186 bytes as UTF-16)
There are exactly 100 operations
All entities have the same partition key
See https://learn.microsoft.com/en-us/dotnet/api/microsoft.windowsazure.storage.table.tablebatchoperation?view=azurestorage-8.1.3
EDIT:
It looks like one of the items (#71/100) is causing this to fail. Structurally it is no different from the other items; however, it does have some rather long string properties, so perhaps there is an undocumented limitation / bug?
EDIT:
The following sequence of UTF-16 bytes (on a string property) is sufficient to cause the exception:
r e n U+0019 space
114 0 101 0 110 0 25 0 115 0 32 0
(it's the byte pair 25 0, i.e. the Unicode END OF MEDIUM character U+0019, which is causing the exception).
EDIT:
Complete example of failing entity:
JSON:
{"SomeProperty":"ren\u0019s ","PartitionKey":"SomePartitionKey","RowKey":"SomeRowKey","Timestamp":"0001-01-01T00:00:00+00:00","ETag":null}
Entity class:
public class TestEntity : TableEntity
{
public string SomeProperty { get; set; }
}
Entity object construction:
var entity = new TestEntity
{
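// UTF-16LE bytes for "ren\u0019s "; the 25, 0 pair is U+0019 (END OF MEDIUM), the character that triggers the emulator exception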
SomeProperty = Encoding.Unicode.GetString(new byte[]
{114, 0, 101, 0, 110, 0, 25, 0, 115, 0, 32, 0}),
PartitionKey = "SomePartitionKey",
RowKey = "SomeRowKey"
};
Based on your description, I can also reproduce the issue you mentioned. After testing, I found that the special Unicode character 'END OF MEDIUM' (U+0019) does not seem to be supported by the Azure Storage Emulator. If replacing it is possible, please try to use another Unicode character instead of it.
We could also give our feedback to the Azure Storage team.

Cairo.Surface is leaking... How to debug it with MonoDevelop?

I have many doubts related to Cairo and GTK# (which runs on .NET and Mono). I'm developing a GTK# application for MS Windows and Linux. I'm using GTK# 2.12 on .NET right now while I work on the application.
I've created a custom widget that uses Cairo.ImageSurface and Cairo.Context objects. As far as I know, I'm calling the Dispose method of every ImageSurface object and every Context object I create inside the widget code.
The widget responds to the "MouseOver" event, redrawing some parts of its DrawingArea.
The (first) problem:
Almost every redrawing operation increases the amount of used memory a little. When the used memory has grown by 3 or 4 KB, the MonoDevelop trace log panel shows me the following message:
Cairo.Surface is leaking, programmer is missing a call to Dispose
Set MONO_CAIRO_DEBUG_DISPOSE to track allocation traces
The code that redraws a part of the widget is something like:
// SRGB is a custom struct, not from Gdk nor Cairo
void paintSingleBlock(SRGB color, int i)
{
using (Cairo.Context g = CairoHelper.Create (GdkWindow)) {
paintSingleBlock (g, color, i);
// We do this to avoid memory leaks. Cairo does not work well with the GC.
g.GetTarget().Dispose ();
g.Dispose ();
}
}
void paintSingleBlock(Cairo.Context g, SRGB color, int i)
{
var scale = Math.Pow (10.0, TimeScale);
g.Save();
g.Rectangle (x(i), y(i), w(i), h(i));
g.ClosePath ();
g.Restore ();
// We don't directly use stb.Color because in some cases we need more flexibility
g.SetSourceRGB (color.R, color.G, color.B);
g.LineWidth = 0;
g.Fill ();
}
The (second) problem: OK, MonoDevelop tells me that I should set MONO_CAIRO_DEBUG_DISPOSE to "track allocation traces" (in order to find the leak, I suppose)... but I don't know how to set this environment variable (I'm on Windows). I've tried using bash and executing something like:
MONO_CAIRO_DEBUG_DISPOSE=1 ./LightCreator.exe
But nothing appears on stderr or stdout... (not even the messages that appear in MonoDevelop's application trace panel). I also don't know how to get the debugging messages that I see inside MonoDevelop, but without MonoDevelop.
Is there anyone with experience debugging GTK# or Cairo# memory leaks?
Thanks in advance.
Just wanted to throw my 2c here, as I was fighting a similar leak problem in Cairo with surfaces. What I noticed is that if I create a Surface object, the ReferenceCount property becomes 1, and if I attach this surface to a Context it becomes not 2 but 3. After disposing the Context, the ReferenceCount comes back down, but only to 2.
So I used some reflection to call the native methods in Cairo to decrease the ReferenceCount when I really want to dispose of a surface. I use this code:
public static void HardDisposeSurface (this Surface surface)
{
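// Dispose() releases the managed wrapper and one native reference; any remaining native references are released below via cairo_surface_destroy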
var handle = surface.Handle;
long refCount = surface.ReferenceCount;
surface.Dispose ();
refCount--;
if (refCount <= 0)
return;
var asm = typeof (Surface).Assembly;
var nativeMethods = asm.GetType ("Cairo.NativeMethods");
var surfaceDestroy = nativeMethods.GetMethod ("cairo_surface_destroy", BindingFlags.Static | BindingFlags.NonPublic);
for (long i = refCount; i > 0; i--)
surfaceDestroy.Invoke (null, new object [] { handle });
}
After using it I still have some leaks, but they seem to be related to other parts of Cairo and not with the surfaces.
I have found that a context created with CairoHelper.Create() will have a reference count of two.
A call to dispose reduces the reference count by one. Thus the context is never freed and keeps its target alive, too.
The native objects have manual reference counting, but the Gtk# wrappers want to keep a native object alive as long as there is a C# instance referencing it.
If a native object is created for a C# wrapper instance it does not need to increment the reference count because the wrapper instance 'owns' the native object and the reference count has the correct value of one. But if a wrapper instance is created for an already existing native object the reference count of the native object needs to be manually incremented to keep the object alive.
This is decided by a bool parameter when a wrapper instance is created.
Looking at the code for CairoHelper.Create() shows something like this:
public static Cairo.Context Create(Gdk.Window window) {
IntPtr raw_ret = gdk_cairo_create(window == null ? IntPtr.Zero : window.Handle);
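// 'owned' is passed as false here, so the wrapper bumps the native reference count even though gdk_cairo_create already returned a new reference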
Cairo.Context ret = new Cairo.Context (raw_ret, false);
return ret;
}
Even though the native context was just created, 'owned' will be false and the C# context will increment the reference count.
There is no fixed version right now; it can only be corrected by patching the source and building Gtk# yourself.
CairoHelper is an auto-generated file; to change the parameter to true, this attribute must be included in gdk/Gdk.metadata:
<attr path="/api/namespace/class[@cname='GdkCairo_']/method[@name='Create']/return-type" name="owned">true</attr>
Everything to build Gtk# can be found here.
https://github.com/mono/gtk-sharp
