My segmented picker has normal Int values as tags. How is this passed to and from CoreData? - core-data

My SwiftUI segmented control picker uses plain Int ".tag(1)" etc values for its selection.
CoreData only has Int16, Int32 & Int64 options to choose from, and with any of those options it seems my picker selection and CoreData refuse to talk to each other.
How is this (??simple??) task achieved please?
I've tried every numeric-based option within CoreData, including Int16 through Int64, doubles and floats; all of them break my code or simply don't work.
Picker(selection: $addDogVM.gender, label: Text("Gender?")) {
    Text("Boy ♂").tag(1)
    Text("?").tag(2)
    Text("Girl ♀").tag(3)
}
I expected any of the 3 CoreData Int options to work out of the box, and to be compatible with the (standard) Int used by the picker.

Each element of a segmented control is represented by an index of type Int, and this index therefore commences at 0.
So using your example of a segmented control with three segments (for example: Boy ♂, ?, Girl ♀), the segments are represented by the indexes 0, 1 & 2.
If the user selects the segmented control that represents Girl ♀, then...
segmentedControl.selectedSegmentIndex = 2
So when storing a value with the Core Data framework that is to be represented as a segmented control index in the UI, I always commence at 0.
Everything you read from this point onwards is programmer preference - that is, to be clear, there are a number of ways to achieve the same outcome, and you should choose the one that best suits you and your coding style. Note also that this can be confusing for a newcomer, so I would encourage patience. My only advice: keep things as simple as possible until you've tested and debugged enough to understand the differences.
So to continue:
The Apple Documentation states that...
...on 64-bit platforms, Int is the same size as Int64.
So in the Core Data model editor (.xcdatamodeld file), I choose to apply an Integer 64 attribute type for any value that will be used as an Int in my code.
Also, somewhere, some time ago, I read that if there is no reason to use Integer 16 or Integer 32, then you should default to Integer 64 in the object model. (I assume Integer 16 and Integer 32 are kept for backward compatibility.) If I find that reference I'll link it here.
I could write about the use of scalar attribute types here and manually writing your managed object subclass/es by selecting in the attribute inspector Class Codegen = Manual/None, but honestly I have decided such added detail will only complicate matters.
So your "automatically generated by Core Data" managed object subclass/es (NSManagedObject) will use the optional NSNumber? wrapper...
You will therefore need to convert your persisted/saved data in your code.
I do this in two places... when I access the data and when I persist the data.
(Noting I assume your entity is of type Dog and that an instance dog exists, i.e. let dog = Dog())
// access
tempGender = dog.gender as? Int
// save
dog.gender = tempGender as NSNumber?
In between, I use a "temp" var property of type Int to work with the segmented control.
// temporary property to use with segmented control
private var tempGender: Int?
UPDATE
I do the last part a little differently now...
Rather than convert the data in code, I made a simple extension to my managed object subclass to execute the conversion. So rather than accessing the Core Data attribute directly and manipulating the data in code, now I instead use this convenience var.
extension Dog {
    var genderAsInt: Int {
        get {
            guard let gender = self.gender else { return 0 }
            return Int(truncating: gender)
        }
        set {
            self.gender = NSNumber(value: newValue)
        }
    }
}
Your picker code...
Picker(selection: $addDogVM.genderAsInt, label: Text("Gender?")) {
    Text("Boy ♂").tag(0)
    Text("?").tag(1)
    Text("Girl ♀").tag(2)
}
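For what it's worth, if you bind the picker directly to the managed object rather than to a view model, a minimal sketch using the genderAsInt extension above might look like this (GenderPicker and the surrounding setup are assumptions, not part of the original answer):
import SwiftUI

// A minimal sketch, not from the original answer: binding the picker straight
// to the managed object via the genderAsInt convenience var defined above.
// Assumes Dog is the generated NSManagedObject subclass and an instance has
// been fetched from (or inserted into) a managed object context.
struct GenderPicker: View {
    @ObservedObject var dog: Dog

    var body: some View {
        Picker(selection: $dog.genderAsInt, label: Text("Gender?")) {
            Text("Boy ♂").tag(0)
            Text("?").tag(1)
            Text("Girl ♀").tag(2)
        }
        .pickerStyle(SegmentedPickerStyle())
    }
}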
Any questions, ask in the comments.

Related

General recommendations and tricks of how to instantiate fields and call methods

I want to instantiate a large number of StringProperty fields to hold text values (>100000). All in all my code performs well so far. I'm still trying to optimize my code as well as possible to harness the full power of my weak CPU (Intel Atom N2600, 1.6 GHz, 2 GB RAM).
I'm calling the following method 100000 times and it takes some seconds until all values are stored in my array of StringProperty.
public void setData(int row, int numberOfCols, String[][] data) {
    this.dataValue = new StringProperty[numberOfCols];
    for (int i = 0; i < numberOfCols; i++) {
        dataValue[i] = new SimpleStringProperty(data[row][i]);
    }
}
Is the method above good enough for instantiating fields and setting values?
Any alternative ideas of how to tweak the method above?

Setting a df threshold beyond which query terms should be ignored

I am using Solr to search and index products from a database. Products have two interesting fields: a name and a description. Product names are normally unique, but sometimes contain common words, which serve as a pre-description of the product. One example would be "UltraScrew - a motor-powered screwdriver". Names are generally much shorter than descriptions.
The problem is that when one searches for a common term, documents that contain it in the name get an unwanted boost, over those that contain it only in the description. This is due to the fact that names are shorter, and even with the normalization added afterwards, it is quite visible.
I was wondering if it is possible to filter terms out of the name, not with a dictionary of stop words, but based on the relative document frequency of the term. That means, if a term appears in more than 10% of the available documents, it should be ignored when the name field is queried. The description field should be left untouched.
Is this generally possible?
Maybe you could use your own similarity:
import org.apache.lucene.search.Similarity;

public class MySimilarity extends Similarity {

    @Override
    public float idf(int docFreq, int numDocs) {
        float freq = ((float) docFreq) / ((float) numDocs);
        if (freq >= 0.1) return 0;
        return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
    }

    // ... (remaining Similarity methods)
}
and use that one instead of the default one.
You can set the similarity for an indexSearcher at lucene level, see this other answer to a question.
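As a rough sketch of that wiring (assuming the Lucene 3.x-era API this answer refers to, with reader being an already-open IndexReader):
// Sketch only: attach the custom similarity to the searcher.
IndexSearcher searcher = new IndexSearcher(reader);
searcher.setSimilarity(new MySimilarity());
In Solr itself, the usual way to plug in a custom similarity is a <similarity class="..."/> element in schema.xml.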
I am not sure if I understood the question correctly, but you could run two separate queries. Pseudo code:
SearchResults nameSearchResults = search("name:X");
if (nameSearchResults.size() * 10 >= corpusSize) { // name-based search useless?
    return search("description:X"); // use description-based search
} else {
    return search("name:X description:X"); // search both fields
}

Is it possible to do data type conversion on SQLBulkUpload from IDataReader?

I need to grab a large amount of data from one set of tables and SQLBulkInsert into another set...unfortunately the source tables are ALL varchar(max) and I would like the destination to be the correct type. Some tables are in the millions of rows...and (for far too pointless political reasons to go into) we can't use SSIS.
On top of that, some "bool" values are stored as "Y/N", some "0/1", some "T/F" some "true/false" and finally some "on/off".
Is there a way to overload IDataReader to perform type conversion? Would need to be on a per-column basis I guess?
An alternative (and it might be the best solution) is to put a mapper in place (perhaps AutoMapper or custom) and use EF to load from one object and map into the other? This would provide a lot of control but also require a lot of boilerplate code for every property :(
In the end I wrote a base wrapper class to hold the SQLDataReader, implementing the IDataReader methods just to call the corresponding SQLDataReader method.
Then inherit from the base class and override GetValue on a per-case basis, looking for the column names that need translating:
public override object GetValue(int i)
{
    var landingColumn = GetName(i);
    string landingValue = base.GetValue(i).ToString();
    object stagingValue = null;

    switch (landingColumn)
    {
        case "D4DTE": stagingValue = landingValue.FromStringDate(); break;
        case "D4BRAR": stagingValue = landingValue.ToDecimal(); break;
        default:
            stagingValue = landingValue;
            break;
    }
    return stagingValue;
}
Works well, is extensible, and very fast thanks to SQLBulkUpload. OK, so there's a small maintenance overhead, but since the source columns will very rarely change, this doesn't really affect anything.
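For the mixed boolean representations mentioned in the question ("Y/N", "0/1", "T/F", "true/false", "on/off"), a conversion helper along the following lines could be plugged into the same switch; ToBoolFlexible is a hypothetical name, not part of the original answer:
// Hypothetical helper, not from the original answer: normalises the assorted
// boolean spellings described in the question into a real bool.
public static class BoolConversionExtensions
{
    public static bool ToBoolFlexible(this string value)
    {
        switch ((value ?? string.Empty).Trim().ToUpperInvariant())
        {
            case "Y":
            case "1":
            case "T":
            case "TRUE":
            case "ON":
                return true;
            case "N":
            case "0":
            case "F":
            case "FALSE":
            case "OFF":
                return false;
            default:
                throw new FormatException("Unrecognised boolean value: " + value);
        }
    }
}
// Usage inside GetValue, for a hypothetical flag column:
// case "SOMEFLAG": stagingValue = landingValue.ToBoolFlexible(); break;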

iOS 5 + GLKView: How to access pixel RGB data for colour-based vertex picking?

I've been converting my own personal OGLES 2.0 framework to take advantage of the functionality added by the new iOS 5 framework GLKit.
After pleasing results, I now wish to implement the colour-based picking mechanism described here. For this, you must access the back buffer to retrieve a touched pixel RGBA value, which is then used as a unique identifier for a vertex/primitive/display object. Of course, this requires temporary unique coloring of all vertices/primitives/display objects.
I have two questions, and I'd be very grateful for assistance with either:
1. I have access to a GLKViewController, GLKView, CAEAGLLayer (of the GLKView) and an EAGLContext. I also have access to all OGLES 2.0 buffer-related commands. How do I combine these to identify the colour of a pixel in the EAGLContext I'm tapping on-screen?
2. Given that I'm using Vertex Buffer Objects to do my rendering, is there a neat way to override the colour provided to my vertex shader which firstly doesn't involve modifying buffered vertex (colour) attributes, and secondly doesn't involve the addition of an IF statement into the vertex shader?
I assume the answer to (2) is "no", but for reasons of performance and non-arduous code revamping I thought it wise to check with someone more experienced.
Any suggestions would be gratefully received. Thank you for your time.
UPDATE
Well I now know how to read pixel data from the active frame buffer using glReadPixels. So I guess I just have to do the special "unique colours" render to the back buffer, briefly switch to it and read pixels, then switch back. This will inevitably create a visual flicker, but I guess it's the easiest way; certainly quicker (and more sensible) than creating a CGImageContextRef from a screen snapshot and analyzing that way.
Still, any tips as regards the back buffer would be much appreciated.
Well, I've worked out exactly how to do this as concisely as possible. Below I explain how to achieve this and list all the code required :)
In order to allow touch interaction to select a pixel, first add a UITapGestureRecognizer to your GLKViewController subclass (assuming you want tap-to-select-pixel), with the following target method inside that class. You must make your GLKViewController subclass a UIGestureRecognizerDelegate:
@interface GLViewController : GLKViewController <GLKViewDelegate, UIGestureRecognizerDelegate>
After instantiating your gesture recognizer, add it to the view property (which in GLKViewController is actually a GLKView):
// Inside GLKViewController subclass init/awakeFromNib:
[[self view] addGestureRecognizer:[self tapRecognizer]];
[[self tapRecognizer] setDelegate:self];
Set the target action for your gesture recognizer; you can do this when creating it using a particular init... however I created mine using Storyboard (aka "the new Interface Builder in Xcode 4.2") and wired it up that way.
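For completeness, if you create the recognizer in code rather than in the Storyboard, the wiring might look roughly like this (a sketch; it assumes the same tapRecognizer property and onTapGesture: action used below):
// Sketch: creating the tap recognizer programmatically instead of in the Storyboard.
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self
                                                                      action:@selector(onTapGesture:)];
[tap setDelegate:self];
[self setTapRecognizer:tap]; // assumes a tapRecognizer property, as used above
[[self view] addGestureRecognizer:tap];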
Anyway, here's my target action for the tap gesture recognizer:
- (IBAction)onTapGesture:(UIGestureRecognizer *)recognizer {
    const CGPoint loc = [recognizer locationInView:[self view]];
    [self pickAtX:loc.x Y:loc.y];
}
The pick method called in there is one I've defined inside my GLKViewController subclass:
- (void)pickAtX:(GLuint)x Y:(GLuint)y {
    GLKView *glkView = (GLKView *)[self view];
    UIImage *snapshot = [glkView snapshot];
    [snapshot pickPixelAtX:x Y:y];
}
This takes advantage of a handy new method snapshot that Apple kindly included in GLKView to produce a UIImage from the underlying EAGLContext.
What's important to note is a comment in the snapshot API documentation, which states:
This method should be called whenever your application explicitly needs the contents of the view; never attempt to directly read the contents of the underlying framebuffer using OpenGL ES functions.
This gave me a clue as to why my earlier attempts to invoke glReadPixels to access pixel data generated an EXC_BAD_ACCESS, and was the indicator that sent me down the right path instead.
You'll notice in my pickAtX:Y: method defined a moment ago I call a pickPixelAtX:Y: on the UIImage. This is a method I added to UIImage in a custom category:
@interface UIImage (NDBExtensions)
- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y;
@end
Here is the implementation; it's the final code listing required. The code came from this question and has been amended according to the answer received there:
@implementation UIImage (NDBExtensions)

- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {

    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    if ((x < width) && (y < height))
    {
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        const UInt8 *data = CFDataGetBytePtr(bitmapData);
        size_t offset = ((width * y) + x) * 4;

        UInt8 b = data[offset + 0];
        UInt8 g = data[offset + 1];
        UInt8 r = data[offset + 2];
        UInt8 a = data[offset + 3];

        CFRelease(bitmapData);

        NSLog(@"R:%i G:%i B:%i A:%i", r, g, b, a);
    }
}

@end
I originally tried some related code found in an Apple API doc entitled: "Getting the pixel data from a CGImage context" which required 2 method definitions instead of this 1, but much more code is required and there is data of type void * for which I was unable to implement the correct interpretation.
That's it! Add this code to your project, then upon tapping a pixel it will output it in the form:
R:24 G:46 B:244 A:255
Of course, you should write some means of extracting those RGBA int values (which will be in the range 0 - 255) and using them however you want. One approach is to return a UIColor from the above method, instantiated like so:
UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];

best practices with code or lookup tables

[UPDATE] Chosen approach is below, as a response to this question
Hi,
I've been looking around on this subject but I can't really find what I'm looking for...
With Code tables I mean stuff like 'marital status', gender, specific legal or social states... More specifically, these types have only a fixed set of properties and the items are not about to change soon (but could), the properties being an Id, a name and a description.
I'm wondering how to handle these best in the following technologies:
in the database (multiple tables, one table with different code-keys...?)
creating the classes (probably something like inheriting ICode with ICode.Name and ICode.Description)
creating the view/presenter for this: there should be a screen containing all of them, so a list of the types (gender, marital status ...), and then a list of values for that type with a name & description for each item in the value-list.
These are things that appear in every single project, so there must be some best practice on how to handle these...
For the record, I'm not really fond of using enums for these situations... Any arguments on using them here are welcome too.
[FOLLOW UP]
OK, I've gotten nice answers from CodeToGlory and Ahsteele. Let's refine this question.
Say we're not talking about gender or marital status, whose values will definitely not change, but about "stuff" that has a Name and a Description, but nothing more. For example: social statuses, legal statuses.
UI:
I want only one screen for this: a listbox with the possible NameAndDescription Types (I'll just call them that), a listbox with the possible values for the selected NameAndDescription Type, and then a Name and Description field for the selected NameAndDescription Type item.
How could this be handled in the views & presenters? The difficulty I see here is that the NameAndDescription Types would then need to be extracted from the class name?
DB:
What are pro/cons for multiple vs single lookup tables?
Using database driven code tables can be very useful. You can do things like define the life of the data (using begin and end dates), add data to the table in real time so you don't have to deploy code, and you can allow users (with the right privileges of course) to add data through admin screens.
I would recommend always using an autonumber primary key rather than the code or description. This allows you to use multiple codes (of the same name but different descriptions) over different periods of time. Plus most DBAs (in my experience) would rather use the autonumber over text-based primary keys.
I would use a single table per coded list, as sketched below. You can put multiple unrelated codes all into one table (using a matrix of sorts) but that gets messy, and I have only found a couple of situations where it was even useful.
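To illustrate that single-purpose approach, one such lookup table might look something like this (a sketch, not from the original answer):
-- Sketch: one lookup table per coded list, with a surrogate key and effective dates.
CREATE TABLE dbo.MaritalStatus (
    MaritalStatusID int IDENTITY(1,1) NOT NULL PRIMARY KEY, -- autonumber surrogate key
    Code            nvarchar(10)  NOT NULL,
    Description     nvarchar(100) NOT NULL,
    DateEffective   datetime      NOT NULL,                  -- "life of the data"
    DateExpired     datetime      NULL
)
GO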
Couple of things here:
Use Enumerations that are explicitly clear and will not change. For example, MaritalStatus, Gender etc.
Use lookup tables for items that are not fixed as above and may change, increase/decrease over time.
It is very typical to have lookup tables in the database. Define a key/value object in your business tier that can work with your view/presentation.
I have decided to go with this approach:
CodeKeyManager mgr = new CodeKeyManager();
CodeKey maritalStatuses = mgr.ReadByCodeName(Code.MaritalStatus);
Where:
CodeKeyManager can retrieve CodeKeys from DB (CodeKey=MaritalStatus)
Code is a class filled with constants, returning strings so Code.MaritalStatus = "maritalStatus". These constants map to the CodeKey table > CodeKeyName
In the database, I have 2 tables:
CodeKey with Id, CodeKeyName
CodeValue with CodeKeyId, ValueName, ValueDescription
Class Code:
public class Code
{
    public const string Gender = "gender";
    public const string MaritalStatus = "maritalStatus";
}
Class CodeKey:
public class CodeKey
{
    public Guid Id { get; set; }
    public string CodeName { get; set; }
    public IList<CodeValue> CodeValues { get; set; }
}
Class CodeValue:
public class CodeValue
{
    public Guid Id { get; set; }
    public CodeKey Code { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}
I find this by far the easiest and most efficient way:
All code-data can be displayed in a identical manner (in the same view/presenter)
I don't need to create tables and classes for every code table that's to come
But I can still get them out of the database easily and use them easily with the CodeKey constants...
NHibernate can handle this easily too
The only thing I'm still considering is throwing out the GUID Id's and using string (nchar) codes for usability in the business logic.
Thanks for the answers! If there are any remarks on this approach, please do!
I lean towards using a table representation for this type of data. Ultimately if you have a need to capture the data you'll have a need to store it. For reporting purposes it is better to have a place you can draw that data from via a key. For normalization purposes I find single-purpose lookup tables to be easier than multi-purpose lookup tables.
That said enumerations work pretty well for things that will not change like gender etc.
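Purely as an illustration of that point, such a fixed set might simply be (names are illustrative):
// Illustrative only: an enumeration for a set of values that will not change.
public enum Gender
{
    Unknown = 0,
    Male = 1,
    Female = 2
}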
Why does everyone want to complicate code tables? Yes there are lots of them, but they are simple, so keep them that way. Just treat them like every other object. They are part of the domain, so model them as part of the domain, nothing special. If you don't, then when they inevitably need more attributes or functionality, you will have to undo all your code that currently uses them and rework it.
One table per code type, of course (for referential integrity and so that they are available for reporting).
For the classes, again one per code type, of course, because if I write a method to receive a "Gender" object, I don't want to be able to accidentally pass it a "MaritalStatus"! Let the compiler help you weed out runtime errors; that's why it's there. Each class can simply inherit from or contain a CodeTable class or whatever, but that's simply an implementation helper (see the sketch below).
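A sketch of what that might look like (class names are illustrative, not from the original answer):
// Illustrative sketch: one class per code table, sharing a common base.
public abstract class CodeItem
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}

public class Gender : CodeItem { }
public class MaritalStatus : CodeItem { }

// A method declared as void Register(Person person, Gender gender)
// can no longer be passed a MaritalStatus by accident.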
For the UI, if it does in fact use the inherited CodeTable, I suppose you could use that to help you out and just maintain it in one UI.
As a rule, don't mess up the database model, don't mess up the business model, but if you want to screw around a bit in the UI model, that's not so bad.
I'd like to consider simplifying this approach even more. Instead of 3 tables defining codes (Code, CodeKey and CodeValue) how about just one table which contains both the code types and the code values? After all the code types are just another list of codes.
Perhaps a table definition like this:
CREATE TABLE [dbo].[Code](
    [CodeType] [int] NOT NULL,
    [Code] [int] NOT NULL,
    [CodeDescription] [nvarchar](40) NOT NULL,
    [CodeAbreviation] [nvarchar](10) NULL,
    [DateEffective] [datetime] NULL,
    [DateExpired] [datetime] NULL,
    CONSTRAINT [PK_Code] PRIMARY KEY CLUSTERED
    (
        [CodeType] ASC,
        [Code] ASC
    )
)
GO
There could be a root record with CodeType=0, Code=0 which represents the type for CodeType. All of the CodeType records will have a CodeType=0 and a Code>=1. Here is some sample data that might help clarify things:
SELECT CodeType, Code, CodeDescription AS Description FROM Code
Results:
CodeType Code Description
-------- ---- -----------
0 0 Type
0 1 Gender
0 2 Hair Color
1 1 Male
1 2 Female
2 1 Blonde
2 2 Brunette
2 3 Redhead
A check constraint could be added to the Code table to ensure that a valid CodeType is entered into the table:
ALTER TABLE [dbo].[Code] WITH CHECK ADD CONSTRAINT [CK_Code_CodeType]
CHECK (([dbo].[IsValidCodeType]([CodeType])=(1)))
GO
The function IsValidCodeType could be defined like this:
CREATE FUNCTION [dbo].[IsValidCodeType]
(
    @Code INT
)
RETURNS BIT
AS
BEGIN
    DECLARE @Result BIT
    IF EXISTS(SELECT * FROM dbo.Code WHERE CodeType = 0 AND Code = @Code)
        SET @Result = 1
    ELSE
        SET @Result = 0
    RETURN @Result
END
GO
One issue that has been raised is how to ensure that a table with a code column has a proper value for that code type. This too could be enforced by a check constraint using a function.
Here is a Person table which has a gender column. It could be a best practice to name all code columns with the description of the code type (Gender in this example) followed by the word Code:
CREATE TABLE [dbo].[Person](
    [PersonID] [int] IDENTITY(1,1) NOT NULL,
    [LastName] [nvarchar](40) NULL,
    [FirstName] [nvarchar](40) NULL,
    [GenderCode] [int] NULL,
    CONSTRAINT [PK_Person] PRIMARY KEY CLUSTERED ([PersonID] ASC)
)
GO
ALTER TABLE [dbo].[Person] WITH CHECK ADD CONSTRAINT [CK_Person_GenderCode]
CHECK (([dbo].[IsValidCode]('Gender',[GenderCode])=(1)))
GO
IsValidCode could be defined this way:
CREATE FUNCTION [dbo].[IsValidCode]
(
    @CodeTypeDescription NVARCHAR(40),
    @Code INT
)
RETURNS BIT
AS
BEGIN
    DECLARE @CodeType INT
    DECLARE @Result BIT

    SELECT @CodeType = Code
    FROM dbo.Code
    WHERE CodeType = 0 AND CodeDescription = @CodeTypeDescription

    IF (@CodeType IS NULL)
    BEGIN
        SET @Result = 0
    END
    ELSE
    BEGIN
        IF EXISTS(SELECT * FROM dbo.Code WHERE CodeType = @CodeType AND Code = @Code)
            SET @Result = 1
        ELSE
            SET @Result = 0
    END

    RETURN @Result
END
GO
Another function could be created to provide the code description when querying a table that has a code column. Here is an example of querying the Person table:
SELECT PersonID,
       LastName,
       FirstName,
       dbo.GetCodeDescription('Gender', GenderCode) AS Gender
FROM Person
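GetCodeDescription isn't defined in the answer; a minimal sketch consistent with the Code table above might be (hypothetical):
-- Hypothetical sketch of the GetCodeDescription function used in the query above.
CREATE FUNCTION [dbo].[GetCodeDescription]
(
    @CodeTypeDescription NVARCHAR(40),
    @Code INT
)
RETURNS NVARCHAR(40)
AS
BEGIN
    DECLARE @Result NVARCHAR(40)

    -- Find the code type row (CodeType = 0), then the value row of that type.
    SELECT @Result = c.CodeDescription
    FROM dbo.Code c
    INNER JOIN dbo.Code t ON t.CodeType = 0
                         AND t.Code = c.CodeType
                         AND t.CodeDescription = @CodeTypeDescription
    WHERE c.Code = @Code

    RETURN @Result
END
GO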
This was all conceived from the perspective of preventing the proliferation of lookup tables in the database and providing one lookup table. I have no idea whether this design would perform well in practice.
