I have an entity with three fields that need to form a unique constraint in my CRM 2011 organization, but when I enter them into a Duplicate Detection Rule, the resulting matchcode length is too long.
At first I was going to just add an OData query in JavaScript on the form to ensure that no record already exists with that unique combination, but that doesn't catch data import issues.
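For reference, the kind of form-level check I had in mind is sketched below; the entity set and field names (new_widgetSet, new_field1, and so on) are placeholders for my actual schema, and values are assumed not to contain quote characters:

    function duplicateExists(v1, v2, v3) {
        // CRM 2011-era OData endpoint; getServerUrl() is the 2011 API call
        var serverUrl = Xrm.Page.context.getServerUrl();
        var filter = "new_field1 eq '" + encodeURIComponent(v1) + "'" +
            " and new_field2 eq '" + encodeURIComponent(v2) + "'" +
            " and new_field3 eq '" + encodeURIComponent(v3) + "'";
        var url = serverUrl + "/XRMServices/2011/OrganizationData.svc/new_widgetSet" +
            "?$select=new_widgetId&$top=1&$filter=" + filter;
        var req = new XMLHttpRequest();
        req.open("GET", url, false); // synchronous, so the onSave handler can block the save
        req.setRequestHeader("Accept", "application/json");
        req.send(null);
        if (req.status !== 200) { return false; } // treat errors as "no duplicate found"
        return JSON.parse(req.responseText).d.results.length > 0;
    }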
Is there some way to get around the 450-character limit, or am I most likely going to need to create a plugin?
Using a new field that concatenates the values of the three fields you want in the duplicate detection rule may be an option. You maintain the value of this field with a workflow (on create/update) and apply the duplicate detection rule to it (provided the combined field does not exceed the matchcode limit).
The plugin approach may be another option if the above is not a convenient solution for your scenario.
You could choose to use only part of one or more of your fields, e.g. the first 150 characters. Do you really need to check the whole of these long fields for absolute uniqueness?
In how many cases would the first 150 characters of each of the three fields be identical but not the remainder? Those cases would be the false positives this approach introduces.
I'm trying to speed up my SSP application by using nlapiLookupField where possible, instead of having to load the whole record and its sublists with nlapiLoadRecord. Unfortunately, it doesn't seem to work with line item fields. Is there an API call to fetch a line item's value without needing nlapiLoadRecord?
I'm using SuiteScript 1.0, as dictated by SCA.
nlapiLookupField() is limited to body fields; however, you can use the other search APIs (e.g. nlapiSearchRecord()) to return any information a saved search can access, which obviously includes item lines. This is particularly useful if you want to read a few fields from a large number of records, but I believe it also performs well compared to loading a record even when you only return a single result, say by passing an internal ID as one of the filters. I haven't tested a single-result search against a single record load, though, so YMMV.
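As a minimal sketch (the 'salesorder' record type, the column names, and the internal ID variable are illustrative assumptions, not from your question), something like this returns one search row per item line:

    // Read each line's item and quantity from one sales order
    // without loading the record.
    function getOrderLines(orderId) {
        var filters = [
            new nlobjSearchFilter('internalid', null, 'anyof', orderId),
            new nlobjSearchFilter('mainline', null, 'is', 'F') // line-level rows only
        ];
        var columns = [
            new nlobjSearchColumn('item'),
            new nlobjSearchColumn('quantity')
        ];
        // Returns one result row per item line, or null when nothing matches.
        return nlapiSearchRecord('salesorder', null, filters, columns) || [];
    }

    var lines = getOrderLines(someInternalId); // someInternalId: a known record id
    for (var i = 0; i < lines.length; i++) {
        var itemId = lines[i].getValue('item');
        var quantity = lines[i].getValue('quantity');
        // ...use the line values here...
    }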
Unfortunately, no. Only body fields are supported with nlapiLookupField or search.lookupFields.
Background is that I am trying to eliminate duplicated documents that have arisen (from a rogue replication?) in a huge database with tens of thousands of docs. I am composing a view column that uniquely identifies a document so that I can sort on that column and remove the duplicates via LotusScript code. The trouble is that a rich text "Body" field contains most of the action/variability, and you can't (AFAICS) use @Abstract in a view column...
Looking around for an alternative, I can see that there is a system variable "size", which includes the size of any attachments, so I am tentatively using this. I note that duplicated docs (identical to the eye) can differ in reported size by up to about 30 bytes.
Why the small differences? I can code the LotusScript to use a 30-byte leeway, but I'd like to know the reason for it. And is it really 30 bytes, or something else?
Probably the document's $Revisions item has one more entry, which means the document was saved one more time.
If the cause of this "copy thousands of documents" accident was replication, then you might be lucky: the documents may contain a $Conflict item. That item contains the DocumentUniqueId of the original document, so you could pair them up that way.
According to the help pop-up:
ID
This field's value represents the script ID, used to identify this record for scripting purposes. It is a text field.
Internal ID
This field's value is a read-only, system-generated unique identifier. It is an integer field.
Both fields seem to uniquely identify a record type. One is a string, the other an integer. The string ID is used for searches and for loading records, but I've also seen the internal ID used when referring to a record type from a list's point of view.
Can anyone provide the reasoning behind having two identifiers and when to use one versus the other when scripting?
The major difference is that you (as the creator of a custom record or script) are in complete control of the text ID. You can establish patterns and best practices for defining these IDs, making it very easy for developers to identify record types just by looking at the string ID. You have no control over the numeric ID. When reading code, it is much easier for me to determine which records are being referred to when it looks like:
nlapiSearchRecord('customrecord_product', null, filters, columns);
nlapiResolveURL('SUITELET', 'customscript_sl_orderservice', 'customdeploy_sl_orderservice')
as opposed to looking at:
nlapiSearchRecord(118, null, filters, columns);
nlapiResolveURL('SUITELET', 13, 1)
I'm not even sure the second nlapiSearchRecord call actually works, but I know that nlapiResolveURL can be written that way.
That said, if you simply let NetSuite generate the text ID, you'll end up with generic IDs like customrecord1, which I find no more useful than the numeric ID. It is good practice to explicitly specify your own IDs.
Furthermore, the numeric ID can vary between environments (e.g. Sandbox can differ from Production until a subsequent refresh occurs). If you follow good migration practices, the text ID should never vary between environments, so your code never has to decide which ID to use based on the environment.
Rarely have I found myself referencing any record, whether native or custom, by its numeric ID; my scripts always use the text ID to reference a record type.
I created a basic OOTB document library to store Word and PDF files. I have been tasked with also creating a few columns to store some basic metadata about the uploaded documents, for example: AuthorFirstName, AuthorLastName, and a column that lists the "topics" discussed in the document.
While I am generally familiar with most document library settings and with creating columns, I am seeking advice on which column datatype might work best for "topics". In most situations, an uploaded document would have 1-4 topics.
I would rather not use a single-line-of-text datatype, as I don't want to ask users to separate the different values (topics) with a delimiter such as a comma or semicolon. I would also like to offer users the option to sort or filter in SharePoint views.
There also seem to be some limitations with the Choice datatype. While Choice fields support fill-in values when a choice is not pre-populated, they only seem to allow one fill-in. I would like the user to be able to use a repeating-table-like interface: add a topic, click an "add" button, add another, and so on.
I think the best approach in your scenario would be the managed metadata feature (http://office.microsoft.com/en-us/sharepoint-help/introduction-to-managed-metadata-HA102832521.aspx). It allows you to sort/filter library items, lets users add new terms into the metadata store, etc.
Using a Lookup field to a custom list is worth considering. The main advantage is that your data choices are stored separately from the main list and are easier to track and manage. The disadvantage is that you cannot easily let the user add a fill-in option as you desire. You would have to link from the library or the upload form to the options list, where they would enter a new option separately from tagging it on the document.
Managed metadata is certainly an option as well, but it requires more overhead, and sorting/filtering on it is a little trickier. A Lookup column is simple, although it does not meet all of your needs.
I need to work with an existing (MySQL) database, where the names of tables and columns are already defined.
If I understand the documentation properly (and I didn't find good documentation on this subject, so links would be highly appreciated), table names are related to the PersistIdentity and must therefore begin with a capital letter (which is not the case I'm facing).
Column names, however, are automatically un-capitalized (at least that's what is implied in the Persistent chapter of the Yesod book, in the code snippet describing what is automatically generated from the declarations), so columns in the DB must begin with a lowercase letter.
Is the description above indeed true?
Can I specifically control the mapping of tables to identities and of columns to fields?
If not, what naming rules are applied automatically, and what names are therefore forbidden?
Also, one of the fields is a VARCHAR(30). How can I communicate that to Persistent? It currently complains (through yesod devel) that:
errMessage = "BLOB/TEXT column 'my_field' used in key specification without a key length"}
This is the result of auto-migration (which I should probably disable anyway). However, if I do want to declare a bounded VARCHAR field, can I do that through Persistent and its auto-migration tool?
I'm not an authority on the MySQL backend, but IIRC (and based on the code), you can control the maximum length with the maxlen=... attribute. Similarly, you can directly control the name a field will have in the database with the sql=... attribute. So, for example, the following might work:
Person sql=people
    name Text sql=full_name maxlen=40
    age Int
I also agree that you should disable the auto-migration code if you're dealing with a pre-existing schema.