CouchDB views: total_rows vs offset vs rows? - couchdb

I am making a POST request to a CouchDB with a list of keys in the body.
This is a follow-up to a previous question asked on Stack Overflow: CouchDB Query View with Multiple Keys Formatting.
I see that the result has 711 total rows in this case, with an offset of 209. To me, an offset implies valid results that have been truncated, which you would need to go to the next page to see.
I'm getting confused because the offset, the rows, and what I actually get do not seem to add up. These are the results that I'm getting:
{
  total_rows: 711,
  offset: 209,
  rows: [{
      id: 'b45d1be2-9173-4008-9240-41b01b66b5de',
      key: 2213,
      value: [Object]
    }, {
      id: 'a73d0b13-5d36-431f-8a7a-2f2b45cb480d',
      key: 2214,
      value: [Object]
    },
    etc. BUT THERE ARE ONLY 303 OBJECTS IN THIS ARRAY????
  ]
}

You have not supplied the query parameters you are using, so I'll have to be a little general.
The total_rows value is the total number of rows in the view itself. The offset is the index in the view of the first row matching the given query. The rows matching the query parameters are returned in the rows array, so their count is simply the length of that array.
If there are no entries in the view for a direct key query, the offset value is the index into the view where the entry would appear if it had the desired key.
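To make the numbers concrete, here is a minimal sketch of how the three values relate, assuming a nano-style db.view call; the design document name, view name, and myKeys are placeholders, not values from the question:

// Minimal sketch: 'mydesign', 'myview' and myKeys are placeholders for your own view and keys.
const result = await db.view('mydesign', 'myview', { keys: myKeys });

console.log(result.total_rows);  // 711 - rows in the entire view, regardless of the query
console.log(result.offset);      // 209 - index within the view of the first matching row
console.log(result.rows.length); // 303 - rows that actually matched the supplied keys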

It would seem that the offset refers to the number of documents in the view BEFORE the first document that matches the key criteria, and the rows are all the documents that match the criteria.
In other words, rows returns every document that matches the key criteria, and offset tells you the 'index' within the full view at which the first matching document was found.
Please let me know if this is not correct :)

Related

Get PartitionedList for partition with more than 2000 documents

I have a partitioned Cloudant database (on the free tier) with a partition that has more than 2000 documents. Unfortunately, running await db.partitionedList('partitionID') returns this object:
{
  total_rows: 2082,
  offset: 0,
  rows: [...]
}
where rows is an array of only 2000 objects. Is there a way for me to get the remaining 82 rows, or to get a list of all 2082 rows together? Thanks.
Cloudant limits the _partition endpoints to returning a maximum of 2000 rows, so you can't get all 2082 rows at once.
The way to get the remaining rows is to store the doc ID of the last row and use it to build a startkey for a second request, appending \0 to ask the list to start from the next doc ID in the index, e.g.:
db.partitionedList('partitionID', {
  startkey: `${firstResponse.rows[1999].id}\0`
})
Note that partitionedList is the equivalent of /{db}/_partition/{partitionID}/_all_docs, so key and id are the same in each row and you can safely assume they are unique (because each is a doc ID), which is what allows the Unicode \0 trick. However, if you wanted to do the same with a _view, you'd need to store both the key and the id and fetch the 2000th row twice.
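As an illustration, a paging loop along these lines (a sketch only, assuming the same db client object and partitionedList call used in the question) would collect every row in the partition:

// Sketch: page through a partition 2000 rows at a time.
async function allPartitionRows(partitionId) {
  const rows = [];
  let startkey = null;
  while (true) {
    const opts = startkey ? { startkey } : {};
    const response = await db.partitionedList(partitionId, opts);
    rows.push(...response.rows);
    if (response.rows.length < 2000) {
      break; // fewer rows than the 2000-row cap means this was the last page
    }
    // Start the next page just after the last doc ID of this page.
    startkey = `${response.rows[response.rows.length - 1].id}\0`;
  }
  return rows;
}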

Is there a vbo to get value from a collection based on value of other fields and save it as a data item?

Relatively new to Blue Prism,
I have a collection that looks like this, with 100+ rows:
Results      Answer
Timestamp    8 Apr 2021
Name         ABC
I'd like to manipulate the data so that if Results = 'Name', I get the Answer (i.e. ABC) and put it into a data item.
Is there any way to do this?
I understand I could hardcode it, i.e. get the value based on a row index and column index, but my data is complex and may not always have the same row index.
Can you use the collection filter to get a collection output? The utility has a filter action where you can input a collection and then use an expression such as
[FieldName] Like "some value"
This returns every complete row in the collection that matches the filter.

Padding in a sharepoint calculated field

Is there a way to pad in a calculated field so that the final result has a standard length?
Let's say I want my calculated field result to be 8 characters long, beginning with MTX, then enough zeros to pad, and then the ID of the record.
So if the record ID is 23, the result would be MTX00023.
Use the TEXT function. Set this as the formula for the calculated field: ="MTX"&TEXT(ID,"00000").
Important note about using the ID field in a calculated field: When the item is created, its ID is not yet available. After creating an item, you will need to edit it so that the calculated field is updated with the proper ID.

NetSuite get transactions that do not contain items with specified attributes

I am attempting to create a list of open, pending-approval sales orders that do not contain items with specific values defined in a custom field. I am able to do this when the sales order contains only items that meet these criteria. However, when there are two items and one meets the criteria while the other does not, my search is no longer valid.
I have two sales orders. Sales order 123 has a shipping method of Ground, while sales order 321 has one item with a shipping method of Ground and another with a shipping method of Freight. I expect to get only sales order 123 returned.
I made this formula in criteria section:
CASE WHEN {item.custitem_shippingmethod} = 'Freight' Or {item.custitem_shippingmethod} = 'Free Freight' THEN 1 ELSE 0 END
but got both orders returned. I tried using the same formula in the summary criteria but that also did not work. Any suggestions?
[Picture of criteria in NetSuite]
Thank you!
You could potentially use summary criteria. It's practical, but it's not the cleanest-looking search. You need a corresponding formula column in your results for it to work:
1. Group by Document Number.
2. Create a Formula (Numeric) result column with a summary type of Sum, using your formula above.
3. Create a summary criteria of type Formula (Numeric) with a summary type of Sum, use the same formula, and set the value to be less than 1.
This will return only the records that do not include those shipping methods.
Alternatively, have you considered running the logic (workflow/SuiteScript) when the record is saved and storing a checkbox value such as "Does not include freight"? It would make searches based on that criterion easier.
For example, if you store the ship method on the line, something like:
// Set your freight method indexes
var freightMethods = ['1', '2'];
var itemLinesCount = nlapiGetLineItemCount('item');
// If a line is found with one of the freight methods you're looking for, mark the record.
// Note: SuiteScript 1.0 sublist indexes start at 1 and run through the line count.
for (var i = 1; i <= itemLinesCount; i++)
{
  var shipMethod = nlapiGetLineItemValue('item', 'custcol_shipmethod', i);
  if (freightMethods.indexOf(shipMethod) !== -1)
  {
    nlapiSetFieldValue('custbody_includes_freight', 'T');
    break;
  }
}
If you store the ship method only on the item record, it can be a bit trickier to manipulate (due to the way NetSuite handles item record types).
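As a rough sketch of why that is trickier (the itemtype-to-record-type map below is an illustrative assumption, not a drop-in solution), you would have to resolve each line's item record type before you could look the field up:

// Sketch only: read custitem_shippingmethod from each line's item record.
// The itemtype-to-record-type map is illustrative; extend it for the item types you actually use.
var recordTypes = { InvtPart: 'inventoryitem', NonInvtPart: 'noninventoryitem', Kit: 'kititem' };
var lineCount = nlapiGetLineItemCount('item');
for (var i = 1; i <= lineCount; i++)
{
  var itemId = nlapiGetLineItemValue('item', 'item', i);
  var itemType = nlapiGetLineItemValue('item', 'itemtype', i);
  var recordType = recordTypes[itemType];
  if (!recordType) continue; // item types not covered by the map are skipped
  var shipMethod = nlapiLookupField(recordType, itemId, 'custitem_shippingmethod', true);
  if (shipMethod === 'Freight' || shipMethod === 'Free Freight')
  {
    nlapiSetFieldValue('custbody_includes_freight', 'T');
    break;
  }
}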
Does the line being returned have a freight value or are you getting another line from the same order?

Is a row in Cassandra the same as key->value where the value is a super column?

I am trying to create a mental model of Cassandra's data model. What I have got so far is that the basic unit of data is a column (name, value, timestamp). A super column can contain several columns (it has a name, and its value is a map). An example of a ColumnFamily (which I suppose contains several entries of data, or rows) is:
UserProfile = { // this is a ColumnFamily
  phatduckk: { // this is the key to this row inside the CF
    username: "phatduckk", // column
    email: "phatduckk@example.com", // column
    phone: "(900) 976-6666" // column
  }, // end row
  ieure: { // another row in the same CF; this is the key to that row
    username: "ieure",
    email: "ieure@example.com",
    phone: "(888) 555-1212",
    age: "66", // a different column than in the previous row
    gender: "undecided" // a different column than in the previous row
  },
}
Question 1: To me it seems that a row in a CF is nothing but a key-value pair where the value is a super column. Am I correct?
Question 2: Could the value (of a row key) be a map of several super columns? What I am thinking is: say I want to create a row with a user's name and address; then the row could have a key (user id) whose value maps to two super columns, C1 (first name, last name) and C2 (street, country).
I think you're trying to wrap your head around the (very) old nomenclature, which was renamed to make it less confusing.
Table
{
  partition key: { // partition
    clustering: { // row
      key: value // column
      key2: value // column
      key3: value // column
    }
    clustering2: { // row
      key: value // column
      ...
    }
    ...
  }
  ...
}
Partitions are ordered by the murmur3 hash of the partition key, which is also used to determine which hosts are replicas. The clustering keys are sorted within each partition, and there's a fixed schema for the fields within a row, each of which has a column.
Using the super column family, column family, super column, column and row nomenclature is just going to get you confused when you read anything that has come out in the last six years. Thrift has also been deprecated, for what it's worth, so don't plan your application around it.
For your questions
Question 1: To me it seems that a row in a CF is nothing but a key-value pair where the value is a super column. Am I correct?
Yes, but the super columns are sorted by their keys, i.e. phatduckk would come after ieure if they are text types sorted in ascending order. That way you can read a slice of names between, for instance, ph and pk and pull them off disk (this is more useful when clustering on a timestamp and looking for ranges of data).
Question 2: Could the value (of a row key) be a map of several super columns? What I am thinking is: say I want to create a row with a user's name and address; then the row could have a key (user id) whose value maps to two super columns, C1 (first name, last name) and C2 (street, country).
You should really look at some newer documentation. I think you have the right idea, but it is hard to relate it exactly to how C* works now. Try starting with:
https://academy.datastax.com/resources/ds101-introduction-cassandra
https://academy.datastax.com/resources/ds220-data-modeling
as some free courses that do a good job of explaining it.
