I have a query related to bulk row fetching using CRecordset (MFC ODBC).
On the MSDN page, it is written that:
The member functions IsDeleted, IsFieldDirty, IsFieldNull, IsFieldNullable, SetFieldDirty, and SetFieldNull cannot be used on recordsets that implement bulk row fetching. However, you can call GetRowStatus in place of IsDeleted, and GetODBCFieldInfo in place of IsFieldNullable.
Now, I want to check whether a field contains NULL / "has no value" data. How can I check this, given that the IsFieldNull function does not work with bulk row fetching?
There is a difference between the IsFieldNull and IsFieldNullable functions.
With bulk row fetching you cannot find out whether a field is null for a particular row. You can only determine whether a particular field is nullable, which simply means whether that field is capable of accepting NULL values.
The CODBCFieldInfo structure contains information about the fields in an ODBC data source.
It has a member called m_nNullability which indicates whether the field accepts NULL values. It can hold one of two values: SQL_NULLABLE if the field accepts NULL values, or SQL_NO_NULLS if the field does not.
So pass a CODBCFieldInfo object to the CRecordset::GetODBCFieldInfo function, which takes it by reference and fills it in. You will get the updated values back; then check the m_nNullability member of that object. Again, this only tells you whether the field is nullable, not whether the field is null for a particular row.
http://msdn.microsoft.com/en-us/library/xexc6xef(v=vs.80).aspx
http://msdn.microsoft.com/en-us/library/k50dcc9s(v=vs.80).aspx
The CRecordset::GetODBCFieldInfo function has two versions: one lets you look up a field by name, the other by index.
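Here is a minimal sketch of that check (my own example, assuming an open bulk-fetching recordset named rs and a column named "MyColumn", both hypothetical):

    // Assumes rs is an open CRecordset with bulk row fetching enabled
    // and the result set contains a column named "MyColumn".
    CODBCFieldInfo fieldInfo;
    rs.GetODBCFieldInfo(_T("MyColumn"), fieldInfo);   // look up by name
    // rs.GetODBCFieldInfo(0, fieldInfo);             // or by zero-based index

    if (fieldInfo.m_nNullability == SQL_NULLABLE)
    {
        // The column can contain NULL values.
    }
    else // SQL_NO_NULLS
    {
        // The column cannot contain NULL values.
    }

For per-row information such as deleted rows, GetRowStatus is the bulk-fetching replacement for IsDeleted, as the MSDN quote above notes.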
Related
So, for example, we have fields 1 through 10. I want to index all of the fields in Azure Search so that you can filter and search on them.
My question is: is there a way to exclude the fields that are NULL for a specific ID, so they are not stored in Azure Search? See the example below.
The data itself is initially stored in Azure Cosmos Database.
In Azure Cosmos DB it would look like this:
Id 1
field 1: a
field 2: b
field 5: c
field 6: d
field 8: e
Id 2
field 3: a
field 2: b
field 5: c
field 9: d
field 10: e
However, in the Azure Search index, it looks like this:
Id 1
field 1:a
field 2:b
field 3:NULL
field 4:NULL
field 5:c
field 6:d
field 7:NULL
field 8:e
field 9:NULL
field 10:NULL
Id 2
field 1:NULL
field 2:b
field 3:a
field 4:NULL
field 5:c
field 6:NULL
field 7:NULL
field 8:NULL
field 9:d
field 10:e
The shortest answer to your question is "no", but it's a little deeper than that.
When you add documents to an Azure Cognitive Search index, the values of each field are stored in a data structure called an inverted index. This stores a dictionary of terms found in the field, and each entry contains a list of document IDs containing that term. It is somewhat similar to a column-oriented database in that regard. The null value that you see in document JSON is never actually stored in the inverted index. This can make it expensive to test whether a field is null, since the query needs to look for all document IDs not contained in the inverted index, but it is perfectly efficient in terms of storage (because it doesn't consume any).
This article has a few simplified examples of how inverted indexes work, although it's about a different topic than your question.
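To make the "nulls are never stored" point concrete, here is a deliberately simplified model of my own (a sketch, not how the service actually implements it), treating one field's inverted index as a map from term to the document IDs containing it:

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    int main()
    {
        // Simplified inverted index for "field 3": term -> IDs of documents
        // whose field 3 contains that term.
        std::map<std::string, std::vector<int>> field3Index;

        // From the example above: Id 2 has field 3 = "a", Id 1 has no value.
        field3Index["a"].push_back(2);
        // Nothing at all is stored for Id 1, so the null costs no storage,
        // but answering "which documents have field 3 = null?" means finding
        // every ID that appears in no posting list for this field.

        for (const auto& [term, docIds] : field3Index)
        {
            std::cout << term << " ->";
            for (int id : docIds) std::cout << ' ' << id;
            std::cout << '\n';
        }
        return 0;
    }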
Your broader concern about having many fields defined in your index is a valid one. There is a tradeoff between schema flexibility and resource utilization as you increase the number of fields in your index. However, this is due to the bookkeeping overhead required for each field, not the "number of nulls in the field" (which doesn't really mean anything since nulls aren't stored).
From your question, it sounds like you're trying to model different "entity types" in the same index, resulting in a sparse index where some subset of the documents have one subset of fields defined, while another subset of documents have different fields defined. This is a scenario that we want to better support in the service. One promising future direction could be supporting multi-index query, so each subset of your schema could have its own index with its own distinct (but perhaps overlapping) set of fields. This is not on our immediate roadmap, but it's something we want to investigate further. Please vote on this User Voice item to help us prioritize.
As far as not saving the null values goes, AFAIK it is not possible. An index in Cognitive Search has a pre-defined schema (much like a relational database table), and based on an attribute's data type, its value will be initialized with a default value (null for most data types).
If your concern is storage, it's not a problem since it's an inverted index.
If you have an issue with the complexity of the JSON data returned, you could implement your own intermediate service that simply hides all NULL values in the JSON. Your application queries your own query service, which in turn queries the actual Azure service, passing all parameters along as-is. The only difference is that your service removes the null keys and values from the JSON to make the responses easier to manage.
The response from search would then appear to be identical to your Cosmos record.
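A minimal sketch of that stripping step, using a hypothetical stripNulls helper and assuming the nlohmann/json library (the HTTP pass-through part is omitted):

    #include <nlohmann/json.hpp>
    #include <iostream>

    using json = nlohmann::json;

    // Return a copy of a search response with every null-valued member removed.
    // Recurses into nested objects and arrays so documents inside "value" are cleaned too.
    json stripNulls(const json& in)
    {
        if (in.is_object())
        {
            json out = json::object();
            for (const auto& item : in.items())
            {
                if (item.value().is_null())
                    continue;                          // drop e.g. "field3": null entirely
                out[item.key()] = stripNulls(item.value());
            }
            return out;
        }
        if (in.is_array())
        {
            json out = json::array();
            for (const auto& element : in)
                out.push_back(stripNulls(element));
            return out;
        }
        return in;                                     // primitive value: keep as-is
    }

    int main()
    {
        // Hypothetical search document for Id 2 from the example above.
        json doc = {
            {"Id", 2}, {"field1", nullptr}, {"field2", "b"},
            {"field3", "a"}, {"field4", nullptr}
        };
        std::cout << stripNulls(doc).dump(2) << '\n';  // only non-null fields remain
        return 0;
    }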
I'm looking at a table (Table1) inside an Excel workbook saved on my OneDrive for Business account. I then want to get the maximum value in the CREATEDDATE column from this table.
I want to avoid pulling down the whole table with the API, so I'm trying to filter the results of my query to only the CREATEDDATE column. However, the column results from the table are not being filtered to the one column and I'm not getting an error to help troubleshoot why. All I get is an HTTP 200 response and the full unfiltered table results.
Is it possible to filter the columns retrieved from the API by the column name? The documentation made me think so.
I've confirmed that /columns?$select=name works correctly and returns just the name field, so I know that it recognizes this as an entity. $filter and $orderby do nothing when referencing any of the entities from the response (name, id, index, values). I know that I can limit columns by position, but I'd rather explicitly reference the column by name in case the order changes.
I'm using this query:
/v1.0/me/drive/items/{ID}/workbook/tables/Table1/columns?$filter=name eq 'CREATEDDATE'
You don't need $filter here; just pull the column by name directly. The prototypes from the Get TableColumn documentation are:
GET /workbook/tables/{id|name}/columns/{id|name}
GET /workbook/worksheets/{id|name}/tables/{id|name}/columns/{id|name}
So in your case, you should be able to simply call:
/v1.0/me/drive/items/{ID}/workbook/tables/Table1/columns/CREATEDDATE
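From there, getting the maximum CREATEDDATE is just a walk over the returned values. A rough sketch, assuming the response body has already been fetched with whatever HTTP client you use, that values is the two-dimensional cell array with the header cell first (adjust if your response differs), and that the dates come back as ISO 8601 strings (if they come back as Excel serial numbers, compare them numerically instead):

    #include <nlohmann/json.hpp>
    #include <iostream>
    #include <string>

    using json = nlohmann::json;

    int main()
    {
        // Hypothetical response body from
        // /v1.0/me/drive/items/{ID}/workbook/tables/Table1/columns/CREATEDDATE
        json column = json::parse(R"({
            "name": "CREATEDDATE",
            "values": [["CREATEDDATE"], ["2021-03-01"], ["2021-07-15"], ["2020-12-30"]]
        })");

        std::string maxDate;
        const auto& rows = column["values"];
        for (std::size_t i = 1; i < rows.size(); ++i)      // skip the header cell
        {
            const std::string cell = rows[i][0].get<std::string>();
            if (cell > maxDate)                            // ISO 8601 strings sort lexicographically
                maxDate = cell;
        }
        std::cout << "Max CREATEDDATE: " << maxDate << '\n';
        return 0;
    }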
I am trying to divide two values and display the result in a new custom field (usrQuantity) on the Stock Items screen.
I want to divide OpenQty (which is a column in POLine) by CARTONQTY (which is not a column name, but an attribute value stored under the AttributeID column of the CSAnswers table).
I am confused about how to perform this division since CARTONQTY is not a field. I noticed that there is a field named CARTONQTY_Attributes in the InventoryItem table, which is generated by some join queries but is not actually present in the database (checked in SQL Server Management Studio).
I tried this formula in the DAC for usrQuantity:
[PXDBInt]
[PXUIField(DisplayName="Quantity")]
[PXFormula(typeof(Div<POLine.orderQty,InventoryItem.CARTONQTY_Attributes>))]
But it is giving the following errors:
The type or namespace name 'POLine' could not be found (are you missing a using directive or an assembly reference?)
The type name 'CARTONQTY_Attributes' does not exist in the type 'PX.Objects.IN.InventoryItem'
Do you want to store the value in the db? Typically it's not recommended practice to persist calculated values unless, for example, there's a performance issue with performing the calculation on the fly. If you don't want to store it, you probably want PXInt instead of PXDBInt. Also, unless you always expect a whole number as the result of your division (which is unlikely), you should probably use a PXDecimal type.
To then get your calculated value into your new field, I would probably set it in the RowSelecting and RowUpdated event handlers by extending the appropriate PXGraph class, so that it is calculated both when you retrieve a row and when the row values are updated.
I'm working in MS Access 2010 and expecting to receive thousands of changes in Excel format that I need to import into a personnel database. I've been tasked with "automating" the update process but could really use some help.
The primary table has 12 fields that could each change for each change form submitted. We have designed a macro to upload the Excel files, but some of the fields on the change form will be blank, resulting in incomplete employee records (e.g. the original employee record has all 12 fields filled in, but the change record only has 1).
Is it possible to write a query or macro to fill in the most recent employee record's empty or NULL values with the non-NULL values from the previous entries?
If I understand correctly, you want to retain the value in the 'primary' table if the value in the 'change' table is null. In that case, the following should work:
UPDATE <primaryTable>
INNER JOIN <changeTable>
    ON <primaryTable>.<keyField> = <changeTable>.<keyField>
SET <primaryTable>.<Field1> = Nz(<changeTable>.<Field1>, <primaryTable>.<Field1>),
    <repeat for each field to update>
Just be sure you are dealing with nulls and not empty strings, which are common in Excel imports. In that case, you need to either change the empty strings to nulls or use an IIf statement instead of the Nz function.
I want to use OrderBy in SPSiteDataQuery to sort items by date; however, the field containing the date differs between the content types.
Can this be solved by sorting on a calculated field? I am currently trying to create a calculated field that checks for the existence of a field (using ISERROR); if it is found, it returns the value, otherwise it returns a default value. Or perhaps I can create a calculated field in the parent content type, then override its formula and field references in a child content type. Would such polymorphism work?
As I found out - NO, it can't.