How do I generate a unique number in a replica copy? [duplicate] - lotus-notes

Possible Duplicate:
How to create an auto incrementing field in lotus domino?
We generate a unique sequence number for each document, like an employee ID.
However, users can enter enrollment information at different locations, so we replicate the database to several servers.
The problem is that a number is generated, but the sequence number gets duplicated when users work on different replicas.

If you must use sequential numbers, you should have the database assign a temporary number to the document when it is created, and then have only one server run an agent that assigns permanent sequential numbers to the documents on a daily or more frequent basis.
However, most of the time people just need UNIQUE numbers assigned to the documents. The @Unique formula generates a unique string to identify a document. Alternatively, you could assign sequential numbers that include the server name as a prefix, or combine date-time with server or user information to create a unique identifier.
My experience is that most of the time when people say they have a requirement for sequential numbers, they're wrong; they just need unique numbers and think sequential is the only way to get them.
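To make the second approach concrete, here is a minimal Python sketch of composing an identifier from a server prefix, a timestamp, and a random suffix. The server and user names are placeholders; in Notes itself you would build the equivalent with @Unique or a formula along these lines.

    import datetime
    import secrets

    def make_unique_id(server_name: str, user_name: str) -> str:
        """Compose an ID that stays unique across replicas by combining
        where (server), who (user), when (timestamp) and a random suffix."""
        stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%d%H%M%S%f")
        suffix = secrets.token_hex(3)  # 6 hex characters of randomness
        return f"{server_name}-{user_name}-{stamp}-{suffix}"

    # IDs created on different replicas cannot collide, because the
    # server prefix differs even if two documents are created at once.
    print(make_unique_id("SRV01", "jdoe"))
    print(make_unique_id("SRV02", "asmith"))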

Related

How to display if a customer has a single transaction or multiple transactions

I would like to ask how to correctly display "single" or "multiple" when a user has more than one purchase from different stores in a day. If a user bought once it should display "single", and if the user bought more than once it should display "multiple".
For reference, column F determines whether the user bought more than once: 0 = first transaction and 1 = other transactions. If a user has only 0s and no 1s, it is considered a single transaction. I tried using this formula in column H:
=IF(COUNTIF(A$2:A2,A2)=1,
IF(COUNTIFS(A$2:A2,A2,D$2:D2,D2)>1,
OFFSET(A$2,MATCH(A2&D2,$A$2:A2&A2&$D$2:D2,0)-1,1),
MAX(IF($A1:A$2=A2,$F1:F$2))+1))
but it was not showing the result I want.
If I understand your question correctly, it should be possible to use SUMPRODUCT to accomplish this.
Try: =IF(SUMPRODUCT(--(A:A=A2),--(D:D=D2))>1,"multiple","single")
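If it helps to see the same logic outside Excel, here is a small pandas sketch; the column names user and date stand in for columns A and D of the sheet and the sample rows are made up.

    import pandas as pd

    # Stand-in data: column A = user, column D = purchase date.
    df = pd.DataFrame({
        "user": ["u1", "u1", "u2", "u3", "u3"],
        "date": ["2024-01-05", "2024-01-05", "2024-01-05", "2024-01-06", "2024-01-07"],
    })

    # Count rows that share the same user and date, then label each row.
    counts = df.groupby(["user", "date"])["user"].transform("size")
    df["label"] = counts.gt(1).map({True: "multiple", False: "single"})
    print(df)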

Arcade expression to calculate unique IDs in ArcGIS Pro

I have a field that I want to be automatically populated with a unique ID whenever a new record is created. I'm pretty rusty at working with Arcade and Attribute Rules. I've figured out how to make a number automatically populate through a sequence and attribute rule, but I don't know how to make the rule take into account the values already in the table.
Using NextSequenceValue, the rule will add an ID that is unique to the new sequence that I created, but it is not unique from the other records that already have IDs. This is an old dataset with loads of different IDs that don't necessarily follow a predictable pattern, otherwise I would just choose my sequence start appropriately (some IDs are in the 100s, some in the 1000s, some even 100,000s, etc.).
I basically want to perform a check where the rule assesses if the ID is unique and if it's not, it adds 1 or something until it is unique.
I tried using a sequence but it doesn't take into account already existing IDs so they aren't truly unique.
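There is no accepted answer here, but the check-and-increment idea can be sketched in plain Python; in practice the same logic would live inside an Arcade attribute rule, and the existing_ids set below is a made-up stand-in for the IDs already in the table.

    def next_unique_id(candidate: int, existing_ids: set) -> int:
        """Starting from a sequence-generated candidate, bump the value
        until it no longer collides with an ID already in the table."""
        while candidate in existing_ids:
            candidate += 1
        return candidate

    # Hypothetical IDs already present in the layer's ID field.
    existing_ids = {101, 102, 1500, 100000}
    print(next_unique_id(101, existing_ids))  # prints 103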

Access - Calculated field (running average)

I am trying to build an Access database from information that currently lives in endless Excel sheets and tables.
I would like to know if there is any way to add a field to one table that is a calculation (an average value) based on several other cells.
I need to calculate the running 6-month average of another field that contains one value per month.
Hopefully the previous image shows what I mean.
What is the best approach to bring this functionality into Access?
You wouldn't normally store a calculated field in Access; instead you would run a query that provides the calculation on the fly.
Without seeing your data structure it is impossible to tell you how to calculate the answer you need, but you would need your data correctly normalised in order to make this simple.
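Purely to illustrate the "calculate on the fly" idea, here is a small pandas sketch of a running 6-month average; the Month and Value column names and the sample figures are assumptions, and in Access itself the equivalent would be a query over the normalised table.

    import pandas as pd

    # Stand-in data: one value per month.
    df = pd.DataFrame({
        "Month": pd.date_range("2023-01-01", periods=12, freq="MS"),
        "Value": [10, 12, 11, 13, 15, 14, 16, 18, 17, 19, 20, 21],
    })

    # Average over the current month and the five before it.
    df["Running6MoAvg"] = df["Value"].rolling(window=6, min_periods=1).mean()
    print(df)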

Cassandra schema to find if a group exists based on a set of users as input

I am trying to define a Cassandra schema for the following use case: each unique set of users defines a group. The query pattern requires a quick way to find whether a group exists based on a set of users given as input.
Since there is very little information given, I will make some best-case assumptions here. I am assuming there is a unique way of identifying a user with a fixed-length N-bit hash (let's call it uid). I am also assuming that the maximum number of users (MAX) in a group is such that MAX < 64*1024*8 / N, because Cassandra has a 64 KB limit on key length. In real terms this means that with 16-bit uids (enough to identify up to 64k users) a key can hold 32,768 uids, so if you have up to 32k users you could form any group, up to and including one containing every user.
Given the above, I would say that a sorted concatenation of the uids would be an easy way to identify the group and the group can be keyed as such.
In that case, a single lookup by the sorted concatenated key formed by the query set of users would give you the answer if you get a hit.
Let's say
key of G1 = u04,u08,u10,u12;
key of G2 = u01,u11,u12;
...
key of GN = u09,uxx,uyy;
If you are searching for whether a group containing users u04, u08, u03 exists, simply create the key "u03,u04,u08" and try to find a hit in the "Groups" column family.
If you are working with a larger user set or with more users per group, then a different approach may be needed.
EDIT: Can you give a sense of the maximum number of users that may form a group? I assume your client would have to pass a list of all those users as part of the query.
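A minimal sketch of the single-key lookup described above, using the DataStax Python driver and assuming a placeholder keyspace and a groups table keyed by the sorted, comma-joined uid string (none of these names come from the original answer):

    from cassandra.cluster import Cluster

    def group_key(uids):
        """Canonical key: sorting the uids means the same set of users
        always produces the same string."""
        return ",".join(sorted(uids))

    cluster = Cluster(["127.0.0.1"])             # placeholder contact point
    session = cluster.connect("demo_keyspace")   # placeholder keyspace

    # Assumed table: CREATE TABLE groups (member_key text PRIMARY KEY, group_id text);
    row = session.execute(
        "SELECT group_id FROM groups WHERE member_key = %s",
        (group_key(["u04", "u08", "u03"]),),
    ).one()

    print("group exists:", row is not None)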

Matching "fuzzy" data based on several inputs

I have a search and matching problem:
Inputs
In my database, I have thousands of names, in addition to some other matching characteristics: a few columns of numerical data, and a few columns of other text that helps identify this specific company.
A prospective client has about 500 company names, and then sparsely populated additional characteristics as mentioned above for each of the names.
Current Process
In the past the process has been manual: try to match each name given by the client by searching the database, find a name "like" the one reported to me, and then verify that the additional characteristics match up. The main issue is that the reported names are not identical; they often contain abbreviations or only parts of the name stored in my database, and the additional characteristics may be incomplete or only partially matching as well.
Automation
I want to automate this process since it happens frequently. The optimal solution would input one company from the client list along with any of the additional characteristics they filled in for it, and then try to find the top 5 matches in my database.
I've never used Lucene or Sphinx, but they seem to be more document-driven. Is there a way to format these inputs so those libraries work for this problem, or alternatively, what other software tools exist that would work?
To Lucene, a 'document' can easily be a row in a table and I think you will like the fuzzy~ search and search hit scoring capabilities.
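Lucene's fuzzy~ queries are used from Java; purely to illustrate the idea of scoring approximate name matches and keeping the top 5, here is a small Python sketch using only the standard library (the name list is made up):

    import difflib

    # Names already in the database (made-up examples).
    known_names = [
        "Acme Manufacturing Inc.",
        "Acme Mfg",
        "Global Widgets LLC",
        "Globex Corporation",
        "Initech Industries",
    ]

    def top_matches(query, candidates, n=5):
        """Score every candidate against the query string and return the best n."""
        scored = [
            (difflib.SequenceMatcher(None, query.lower(), c.lower()).ratio(), c)
            for c in candidates
        ]
        return sorted(scored, reverse=True)[:n]

    print(top_matches("Acme Manufacturing", known_names))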
