How to set the RKEntityMapping.identificationAttributes in RestKit case-insensitive? - core-data

The given identificationAttributes may use different letter casing locally and on the server. Can I make the identificationAttributes case-insensitive so that the existing data is updated correctly?

They aren't really unique identification attributes if they aren't unique or don't match, so ideally you should correct the server.
Changing the caching system, while possible by subclassing, isn't trivial. So, I'd look at using key paths to run uppercaseString on the value in your identification attribute during mapping.
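The underlying idea, independent of RestKit, is to canonicalize the identification value to a single case before it is used for matching, so that the local and server values resolve to the same record. A hedged sketch of that idea in TypeScript (the upsertAccount helper and Account shape are made up for illustration, not RestKit API):

type Account = { externalId: string; name: string };

// In-memory stand-in for the local store, keyed by the canonical (upper-cased) identifier.
const store = new Map<string, Account>();

function upsertAccount(incoming: Account): Account {
  // Same normalization the uppercaseString key path would apply during mapping.
  const key = incoming.externalId.toUpperCase();
  const merged = { ...store.get(key), ...incoming, externalId: key };
  store.set(key, merged);
  return merged;
}

// "abc123" from the local store and "ABC123" from the server now hit the same record.
upsertAccount({ externalId: "abc123", name: "Local copy" });
upsertAccount({ externalId: "ABC123", name: "Server copy" });
console.log(store.size); // 1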

Related

How to use UUIDs in Neo4j, to keep pointers to nodes elsewhere?

Thanks to some other questions, I figured out that Neo4j uses internal ids for its nodes that can get recycled when nodes are deleted.
That's a real concern for me, as I need to store a reference to my node in another database (relational this time) in order to keep some sort of "pinned" nodes.
I've tried using https://github.com/graphaware/neo4j-uuid to generate them automatically, but I did not succeed: all my queries kept running indefinitely.
My new idea is to add a new field to each of my nodes that I would fill manually with a UUID generated by the Node.js uuid package through uuid.v4().
I have also come across the concept of indexing several times, which is still unclear to me, but it seems that I should run this query:
CREATE INDEX ON :MyNodeLabel(myUUIDField)
If you think that it doesn't make sense at all don't hesitate to come up with another proposition. I am open to all kinds of suggestions.
Thanks for your help.
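For reference, the manual approach sketched in the question would look roughly like this in TypeScript (assuming the neo4j-driver and uuid packages from npm; the connection details are placeholders and MyNodeLabel/myUUIDField are the names used above):

import neo4j from "neo4j-driver";
import { v4 as uuidv4 } from "uuid";

// Connection details are assumptions; adjust to your environment.
const driver = neo4j.driver("bolt://localhost:7687", neo4j.auth.basic("neo4j", "password"));

async function createMyNode(props: Record<string, unknown>): Promise<string> {
  const session = driver.session();
  try {
    // Generate the stable identifier client-side and store it on the node.
    const result = await session.run(
      "CREATE (n:MyNodeLabel) SET n = $props, n.myUUIDField = $uuid RETURN n.myUUIDField AS uuid",
      { props, uuid: uuidv4() }
    );
    // This value is what you would persist in the relational database.
    return result.records[0].get("uuid") as string;
  } finally {
    await session.close();
  }
}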
I would consider using the APOC library's apoc.uuid.install procedure.
Definitely create a unique constraint on the label and attribute you are going to use. This will not only create an index but also guarantee uniqueness of the attribute in the label namespace.
CREATE CONSTRAINT ON (mynode:MyNodeLabel) ASSERT mynode.myUUIDField IS UNIQUE
Then call the apoc.uuid.install procedure. This will create UUIDs in the attribute myUUIDField on all of the existing MyNodeLabel nodes and on any new ones.
CALL apoc.uuid.install('MyNodeLabel', {addToExistingNodes: true, uuidProperty: 'myUUIDField'}) yield label, installed, properties
NOTE: you will have to install APOC and set apoc.uuid.enabled=true in the neo4j.conf file.

How to set a field containing unique key

I want to save data in CouchDB documents, and as I am used to doing in an RDBMS, I want to create a field which may only contain a unique value across the database. If I then save a document and there is already a document with the same unique key, I expect an error from CouchDB.
I guess I can use the document ID and replace the auto-generated doc id with my own value, but is there a way to make some other field the unique key holder? Any best practices regarding unique keys?
As you said, the generated _id is enforced as unique. That is the only real unique constraint in CouchDB, and some people use it as such for their own applications.
However, this only applies to a single CouchDB instance. Once you start introducing replication and other instances, you can run into conflicts if the same _id is generated on more than 1 node. (depending on how you generate your _ids, this may or may not be a problem)
As Dominic said, the _id is the only field that is almost assured to be unique. What is certain is that you have to design your "database" in a different way. Keep in mind that the _id is unique database-wide: you will be able to have only one document with a given _id.
The _id must be a string, which means you can't have an array or a number or anything else.
If you want to make the access public, you'll have to think about how to generate your id in a way that it won't mess with your system.
I came up with ids that looked like this:
"email:email@example.com"
It worked well in my particular case to prevent people from creating multiple auth records on the same email. But as Dominic said, if you have multiple masters, you'll have to think about possible conflicts.
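A minimal TypeScript sketch of that pattern against CouchDB's HTTP API (the database name, URL, and missing authentication are assumptions): put the unique value into the _id and treat a 409 response as "key already taken".

// CouchDB enforces uniqueness only on _id, so the unique value goes there.
async function createUserByEmail(email: string, doc: object): Promise<boolean> {
  const id = encodeURIComponent(`email:${email}`); // same "email:..." scheme as above
  const res = await fetch(`http://localhost:5984/users/${id}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(doc),
  });
  if (res.status === 201) return true;   // created; the email was free
  if (res.status === 409) return false;  // conflict; another document already claimed this email
  throw new Error(`Unexpected CouchDB response: ${res.status}`);
}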

In co-located caching, how is the location of a key determined?

This is a design question about the Caching component. I can see two approaches to determining where the data is:
1) Each role instance maintains a table containing the entire set of keys, tracking which instance holds the data for each key.
2) The location of the data is determined by the hash code of the key.
In the first case, it would mean that it's important to keep the set of keys reasonably small.
In the second case, testing the existence of a key would generate a network round trip...
My guess is 2): it uses a hash to determine the location, maybe consistent hashing.
And yes, I think testing the existence of a key would generate network I/O, but I don't think it needs to call every co-located cache server: from the hash it should know which server contains your data and it only needs to connect to that one.
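A toy sketch of option 2 in TypeScript (not the actual Azure Caching implementation): hash the key, map the hash onto a ring of servers, and any client can compute the owner locally before making a single network call.

import { createHash } from "crypto";

// Hash a string onto a 32-bit ring position.
function hash32(s: string): number {
  return createHash("md5").update(s).digest().readUInt32BE(0);
}

class HashRing {
  private ring: { point: number; server: string }[] = [];
  constructor(servers: string[], replicas = 64) {
    // Each server gets several points on the ring so keys spread evenly.
    for (const server of servers) {
      for (let i = 0; i < replicas; i++) {
        this.ring.push({ point: hash32(`${server}#${i}`), server });
      }
    }
    this.ring.sort((a, b) => a.point - b.point);
  }
  // The owner of a key is the first ring point at or after the key's hash.
  ownerOf(key: string): string {
    const h = hash32(key);
    const entry = this.ring.find((e) => e.point >= h) ?? this.ring[0];
    return entry.server;
  }
}

const ring = new HashRing(["cache-0", "cache-1", "cache-2"]);
console.log(ring.ownerOf("user:42")); // only this instance needs to be asked about "user:42"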

Are MongoDB ids guessable?

If you bind an api call to the object's id, could one simply brute force this api to get all objects? If you think of MySQL, this would be totally possible with incremental integer ids. But what about MongoDB? Are the ids guessable? For example, if you know one id, is it easy to guess other (next, previous) ids?
Thanks!
Update Jan 2019: As mentioned in the comments, the information below is true up until version 3.2. Version 3.4+ changed the spec so that machine ID and process ID were merged into a single random 5 byte value instead. That might make it harder to figure out where a document came from, but it also simplifies the generation and reduces the likelihood of collisions.
Original Answer:
+1 for Sergio's answer. In terms of answering whether they could be guessed or not: they are not hashes, they are predictable, so they can be "brute forced" given enough time. The likelihood depends on how the ObjectIDs were generated and how you go about guessing. To explain, first read the spec here:
Object ID Spec
Let us then break it down piece by piece:
TimeStamp - completely predictable as long as you have a general idea of when the data was generated
Machine - this is an MD5 hash of one of several options, some of which are more easily determined than others, but highly dependent on the environment
PID - again, not a huge number of values here, and could be sleuthed for data generated from a known source
Increment - if this is a random number rather than an increment (both are allowed), then it is less predictable
To expand a bit on the sources, ObjectIDs can be generated by:
MongoDB itself (but can be migrated, moved, updated)
The driver (on any machine that inserts or updates data)
Your Application (you can manually insert your own ObjectID if you wish)
So, there are things you can do to make them harder to guess individually, but without a lot of forethought and safeguards, for a normal data set, the ranges of valid ObjectIDs should be fairly easy to work out since they are all prefixed with a timestamp (unless you are manipulating this in some way).
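To see how much the timestamp prefix gives away, a small TypeScript sketch using the official mongodb driver (getTimestamp and ObjectId.createFromTime are real driver APIs; the connection string, database, and collection names are placeholders):

import { MongoClient, ObjectId } from "mongodb";

// Every ObjectId embeds its creation time in the leading 4 bytes.
const someId = new ObjectId();
console.log(someId.getTimestamp()); // creation time, to the second

// Which also means a time window can be enumerated without knowing any real id.
async function idsCreatedBetween(uri: string, from: Date, to: Date) {
  const client = await MongoClient.connect(uri);
  try {
    return await client
      .db("app")                 // database and collection names are placeholders
      .collection("things")
      .find({
        _id: {
          $gte: ObjectId.createFromTime(Math.floor(from.getTime() / 1000)),
          $lt: ObjectId.createFromTime(Math.floor(to.getTime() / 1000)),
        },
      })
      .toArray();
  } finally {
    await client.close();
  }
}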
Mongo's ObjectIds were never meant to be a protection from brute-force attacks (or any attack, for that matter). They simply offer global uniqueness. You should not assume that some object can't be accessed by a user just because that user should not know its id.
For an actual protection of your resources, employ other techniques.
If you are defending against unauthorized access, place some authorization logic in your app (allow access for legitimate users, deny it for everyone else).
If you want to hinder dumping all objects, use some kind of rate limiting. Combine with authorization if applicable.
Optional reading: Eric Lippert on GUIDs.
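For the rate-limiting suggestion, a minimal Express sketch (express and express-rate-limit are assumptions here; any equivalent middleware or gateway feature works just as well):

import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// Cap each client to 100 object lookups per 15 minutes; even with predictable
// ids, enumerating the whole id space becomes impractical.
app.use("/api/objects", rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));

app.get("/api/objects/:id", (req, res) => {
  // ...authorization check first, then fetch the object by id...
  res.json({ id: req.params.id });
});

app.listen(3000);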

How do I add a set of strings to an Entity?

This is a simple requirement: I want to add a set of strings to Accounts in Dynamics 2011. The strings are external IDs for other systems. All the strings should be unique across all entities.
The only way I can see to do this is to define the strings as entities (say 'ExternalCode') and set up a 1:N relationship between Account and ExternalCode, but this seems incredibly overweight. Also, defining it as an entity insists that I give the 'ExternalCode' a name, which it obviously doesn't have.
What's the best way to implement this?
Thank you
Ryan
It may seem overweight, but think about entities as if they were tables. Would you create a second table inside MS SQL? If so, then you should create another entity. CRM is very well optimized, so I wouldn't worry about this additional overhead.
Alternatively, you could always carry the GUID in the other system.
How are these unique references entering your CRM system? Are you importing the data from each of the external systems? If so, I assume the references are unique in the external system, and once imported you want to make sure that none of these references are duplicated?
Additionally, how many strings are we talking about here? If it is a small number, then it would make sense to just define attributes to manage them and check for duplicates in one of the following ways:
1) Some JavaScript could be used to make an oData query to confirm the 'uniqueness' of your external reference number before the record is committed (see the sketch after this list). But this is not sufficient if records will also be created programmatically in the system.
2) A plug-in which fires on pre-create to query the system for other records that match the same unique reference number and handles a match accordingly.
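A hedged TypeScript sketch of option 1 (AccountSet and new_externalcode are made-up placeholder names, and fetch/async is used for brevity; a real CRM 2011 form script would use XMLHttpRequest against the same oData endpoint):

// CRM 2011 exposes oData at /XRMServices/2011/OrganizationData.svc; schema names below are placeholders.
async function externalCodeIsUnique(serverUrl: string, code: string): Promise<boolean> {
  const url =
    `${serverUrl}/XRMServices/2011/OrganizationData.svc/AccountSet` +
    `?$select=AccountId&$filter=new_externalcode eq '${encodeURIComponent(code)}'&$top=1`;
  const res = await fetch(url, { headers: { Accept: "application/json" } });
  const body = await res.json();
  // The 2011 endpoint wraps results as { d: { results: [...] } }.
  return body.d.results.length === 0;
}

// Usage before committing the record:
// if (!(await externalCodeIsUnique(serverUrl, "ext-001"))) { /* warn the user or cancel the save */ }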
However, if there are many of them then it may make more sense to define a separate entity as you say and then as above you could associate a new 'reference record' with the entity via a plug-in, but again, check if the record already exists and then either handle an exception or merely associate with an existing record if that is appropriate.
I think the key is what you want to do if you do find a duplicate, and how these records are going to be created in the system (e.g. via the UI or programmatically, or potentially both).
Happy to provide some more assistance if you have some more details.
