Azure FHIR: How _lastUpdated is transformed to ResourceSurrogateId

Continuing my research on Azure FHIR (SQL version), I found that the _lastUpdated search parameter is somehow correlated with the ResourceSurrogateId field (in the Resource table).
Moreover, I believe both are the same value but in different formats. However, I did not find any source explaining how _lastUpdated (a date-time value) is transformed into the ResourceSurrogateId (a numeric value).
Can somebody explain to me how I can get a ResourceSurrogateId based on its original date-time value?
Test:
I ran the following FHIR REST API request: https://XXXXXXXXXXX/DeviceComponent?_lastUpdated=gt2019-07-01 and the actual query on the database was:
FROM dbo.Resource r
WHERE ResourceTypeId = #p1
AND ResourceSurrogateId >= #p2
AND IsHistory = 0
AND IsDeleted = 0
ORDER BY r.ResourceSurrogateId ASC
OPTION(RECOMPILE)',N'#p0 int,#p1 smallint,#p2 bigint',#p0=11,#p1=32,#p2=5095809792000000000

The resource surrogate ID encodes the datetime of insert to millisecond precision along with a "uniquifier" that comes from a cycling sequence on the database. The code that converts from a surrogate ID to datetime and back is here.
You will not be able to get the surrogate ID from the datetime, but you will be able to get lower and upper bounds on it.
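As an illustration, here is a minimal C# sketch of that bound calculation (not the actual FHIR server helper), under the assumption that the surrogate ID is the number of milliseconds since 0001-01-01 UTC multiplied by 80,000, with the low range left for the uniquifier. The factor of 80,000 is inferred from the captured parameter value 5095809792000000000, which maps back to 2019-07-02T00:00:00Z under this encoding.

using System;

static class SurrogateIdSketch
{
    // Assumption: each millisecond of "last updated" time owns a block of 80,000
    // consecutive surrogate IDs, and the uniquifier picks a slot inside that block.
    private const long IdsPerMillisecond = 80_000;

    // Smallest surrogate ID a resource stamped at this instant could have.
    public static long LowerBound(DateTime lastUpdatedUtc) =>
        (lastUpdatedUtc.Ticks / TimeSpan.TicksPerMillisecond) * IdsPerMillisecond;

    // Exclusive upper bound: the lower bound of the next millisecond.
    public static long UpperBoundExclusive(DateTime lastUpdatedUtc) =>
        LowerBound(lastUpdatedUtc) + IdsPerMillisecond;

    // Reverse direction: drop the uniquifier and rebuild the timestamp.
    public static DateTime ToLastUpdated(long resourceSurrogateId) =>
        new DateTime((resourceSurrogateId / IdsPerMillisecond) * TimeSpan.TicksPerMillisecond,
                     DateTimeKind.Utc);

    static void Main()
    {
        // _lastUpdated=gt2019-07-01 means "after the end of that day", i.e. from 2019-07-02T00:00:00Z.
        long bound = LowerBound(new DateTime(2019, 7, 2, 0, 0, 0, DateTimeKind.Utc));
        Console.WriteLine(bound); // prints 5095809792000000000, the #p2 value in the captured query
    }
}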

Related

send datetime with offset field in Stream Analytics

I'm trying to send a Timestamp field which is ISO 8601 with an offset ("2023-02-01T11:11:12.2220000+03:00").
Azure doesn't really work with offsets; I first encountered that when sending data to Event Hub.
I was hoping to resolve this by splitting the timestamp field into two fields:
timestamp: 2023-02-01T11:11:12.2220000
offset: +03:00
and combining them in the SA query.
This seemed to have worked in the Query editor, where the test output is shown as a correct timestamp + offset;
however, when data is sent to the output (in this case SQL, field type datetimeoffset), the value looks like this:
2023-02-01T08:11:12.2220000+00:00
I suspect this is because the timestamp field type in SA is datetime (as seen in the query explorer test results window);
even if I cast to nvarchar, the field type is still datetime.
Is there a way to force SA to use specific types for fields (in this case, treat the field as a string and not a datetime)?
Or, in general, how can I pass a value like "2023-02-01T11:11:12.2220000+03:00" through SA without altering it? Bonus points if it can be done in Event Hub as well.

Return the item number X in DynamoDB

I would like to provide one piece of content per day, storing all items in DynamoDB. I will add new content from time to time, but only one piece of content needs to be read per day.
It seems it's not recommended to use an incremental ID as the primary key in DynamoDB.
Here is what I have at the moment:
content_table
id, content_title, content_body, content_author, view_count
1b657df9-8582-4990-8250-f00f2194abe9, title_1, body_1, author_1, view_count_1
810162c7-d954-43ff-84bf-c86741d594ee, title_2, body_2, author_2, view_count_2
4fdac916-0644-4237-8124-e3c5fb97b142, title_3, body_3, author_3, view_count_3
The database will have a low rate of new items, as I will add new content myself manually.
How can I get item number X without querying the whole database in Node.js?
Should I switch back to a MySQL database?
Should I use a homemade auto-increment even though it's an anti-pattern?
Should I use a time-based UUID and do a query like: get all IDs, sort them, and take number X from the array?
Should I use a tool like http://www.stateful.co/ ?
Thanks for your help.
I would make the date your hash key; you can then simply get the content for any particular day using GetItem.
date, content_title, content_body, content_author, view_count
20180208, title_1, body_1, author_1, view_count_1
20180207, title_2, body_2, author_2, view_count_2
20180206, title_3, body_3, author_3, view_count_3
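A hedged sketch of that single-item lookup, written here with the AWS SDK for .NET (the question mentions Node.js, but the call shape is the same); the table and attribute names simply follow the example rows above:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

class DailyContentReader
{
    static async Task Main()
    {
        var client = new AmazonDynamoDBClient();

        // One GetItem call fetches exactly the row for the requested day;
        // no scan or sorting is needed, regardless of how many items the table holds.
        var response = await client.GetItemAsync(new GetItemRequest
        {
            TableName = "content_table",
            Key = new Dictionary<string, AttributeValue>
            {
                ["date"] = new AttributeValue { S = "20180208" } // hash key: the day to read
            }
        });

        if (response.Item != null && response.Item.Count > 0)
        {
            Console.WriteLine(response.Item["content_title"].S);
        }
    }
}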
If you think you might have more than one piece of content for any one day in future, you could add a datetime attribute and make this the range key
date, datetime, content_title, content_body, content_author, view_count
20180208, 20180208101010, title_1, body_1, author_1, view_count_1
20180208, 20180208111111, title_2, body_2, author_2, view_count_2
20180206, 20180206101010, title_3, body_3, author_3, view_count_3
It's then still very fast and simple to execute a Query to get the content for a particular day.
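If you do add the datetime range key, a Query scoped to one date value returns every piece of content for that day. Here is a sketch continuing the client from the snippet above; note that date is a reserved word in DynamoDB, so it needs an expression attribute name:

// Continues the sketch above: "client" is the AmazonDynamoDBClient already created.
var request = new QueryRequest
{
    TableName = "content_table",
    KeyConditionExpression = "#d = :day",
    // "date" is reserved in DynamoDB, so alias it.
    ExpressionAttributeNames = new Dictionary<string, string> { ["#d"] = "date" },
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>
    {
        [":day"] = new AttributeValue { S = "20180208" }
    }
};

// One returned item per datetime range key stored under that date.
var items = (await client.QueryAsync(request)).Items;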
Note that due to the way DynamoDB distributes throughput, if you choose the second option, you might want to archive old content into another table.

Azure Logic App SQL ODATA Filter on Date

I'm creating a new Logic App that reads a table where DateCreated < DATEADD(DAY, -60, GETDATE()) and updates an Archived bit to 1.
However, I can't for the life of me figure out how to implement that filter as part of the ODATA query.
Here's what I'm trying so far:
DateCreated lt addDays(utcNow(),-60)
However, I get "An unknown function with name 'utcnow' was found. This may also be a function import or a key lookup on a navigation property, which is not allowed.\r\n inner exception: An unknown function with name 'utcnow' was found. This may also be a function import or a key lookup on a navigation property, which is not allowed."
How can I filter on a dynamic date in the filter?
However, I can't for the life of me figure out how to implement that filter as part of the ODATA query.
I suppose you mean the OData query on the SQL connector?
Can you try the following:
DateCreated lt #{addDays(utcNow(),-60)}
Based on the previous answer, you should try the same command:
DateCreated lt #{addDays(utcNow(),-60)}
You must also ensure that the data type, on the SQL side, is datetimeoffset.
Three ways to do this:
Change the type of the field in your table,
Create a view and cast the DateCreated field to DATETIMEOFFSET:
CREATE VIEW [dbo].[myview] AS
SELECT MyFields, ..., CAST(DateCreated AS DATETIMEOFFSET) AS DateCreated
FROM MyTable
Create a stored procedure with a DATETIMEOFFSET parameter, and convert the parameter to a DATETIME.
If you cannot change your SQL code, this piece of code is the solution:
year(DateCreated) lt year(#{addDays(utcNow(),-60)}) or (
  year(DateCreated) eq year(#{addDays(utcNow(),-60)}) and (
    month(DateCreated) lt month(#{addDays(utcNow(),-60)}) or (
      month(DateCreated) eq month(#{addDays(utcNow(),-60)}) and (
        ... <same thing for other date parts>
      )
    )
  )
)
You have to compare each part of your date:
This is an interesting issue that sometimes shows up when dates, times, datetimes, and specific time zones come into play. Comparing a DateTimeZone to a date is problematic, because it might be less in arithmetic terms, but only if the time zone matches; without that critical piece of information, these data types cannot be compared.
One alternative is to use the standard OData functions to retrieve parts of the data type. For example:
$filter = year(release_date) lt year(dtz)
Of course, you must be careful to ensure that you are implementing the correct logic with respect to time zones, but you are probably aware of that.
OData reference:
http://www.odata.org/documentation/odata-version-2-0/uri-conventions/

Forcing a string field to DateTime in WCF query with Azure Table Storage

So, a quick overview of what I'm doing:
We're currently storing events to Azure Table storage from a Node.js cloud service using the "azure-storage" npm module. We're storing our own timestamps for these events in storage (as opposed to using the Azure defined one).
Now, we have coded a generic storage handler script that for the moment just stores all values as strings. To save refactoring this script, I was hoping there would be a way to tweak the query instead.
So, my question is: is it possible to query by datetime where the stored value is not actually a datetime field but instead a string?
My original query included the following:
.where( "_timestamp ge datetime'?'", timestamp );
In the above code I need to somehow have the query treat _timestamp as a datetime instead of a string...
Would something like the following work, or what's the best way to do it?
.where( "datetime _timestamp ge datetime'?'", timestamp );
AFAIK, if the attribute type is String in an Azure Table, you can't convert that to DateTime. Thus you won't be able to use .where( "_timestamp ge datetime'?'", timestamp );
If you're storing your _timestamp in yyyy-MM-ddTHH:mm:ssZ format, then you could simply do a string-based query like
.where( "_timestamp ge '?'", timestamp );
and that should work just fine, other than the fact that this query is going to do a full table scan rather than an optimized query. However, if you're storing in some other format, you may get different results.
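To see why the plain string comparison behaves like a date comparison, here is a small illustration (in C#, purely for brevity; the point is about the format, not the SDK). Fixed-width, zero-padded ISO 8601 timestamps in a single time zone sort lexically in the same order as they do chronologically:

using System;
using System.Globalization;

class IsoStringOrdering
{
    static void Main()
    {
        string a = "2015-03-17T09:00:00Z";
        string b = "2015-04-08T23:59:00Z";

        // Lexical (ordinal) comparison of the raw strings...
        bool lexical = string.CompareOrdinal(a, b) < 0;

        // ...agrees with comparing them as actual DateTime values.
        bool chronological =
            DateTime.Parse(a, CultureInfo.InvariantCulture, DateTimeStyles.AdjustToUniversal) <
            DateTime.Parse(b, CultureInfo.InvariantCulture, DateTimeStyles.AdjustToUniversal);

        Console.WriteLine(lexical == chronological); // True
    }
}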

Using Lexical Filtering of Azure Table on range of RowKey values

Problem: no results are returned.
I'm using the following code to get a range of objects from a partition with only 100 or so rows:
var rangeQuery = new TableQuery<StorageEntity>().Where(
    TableQuery.CombineFilters(
        TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey),
            TableOperators.And,
            TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, from)
        ),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThanOrEqual, to)
    )
);
var results = table.ExecuteQuery(rangeQuery);
foreach (StorageEntity entity in results)
{
    storageEntities.Add(entity);
}
NOTE: it doesn't seem to matter how I combine the three terms; no results are returned. An example of an entity I am expecting is this (partitionKey, rowKey):
"10005678", "PL7NR_201503170900"
The ranged filter code generates this expression:
((PartitionKey eq '10005678') and (RowKey ge 'PL7NR_201503150000')) and (RowKey lt 'PL7NR_201504082359')
But I have also tried this (which is my preferred approach for performance reasons, i.e. partition scan):
(PartitionKey eq '10005678') and ((RowKey ge 'PL7NR_201503150000') and (RowKey lt 'PL7NR_201504082359'))
My understanding is that the Table storage performs a lexical search and that these row keys should therefore encompass a range that includes a row with the following keys:
"10005678", "PL7NR_201503170900"
Is there something fundamentally wrong with my understanding?
Thanks for looking at this.
UPDATE: question updated thanks to Gaurav's answer. The code above implicitly handles continuation tokens (i.e. the foreach loop) and there are only 100 or so items in the partition, so I do not see the continuation tokens as being an issue.
I have tried removing the underscores ('_') from the key and even tried moving the prefix from the rowKey and adding it as a suffix to the partitionKey.
NOTE: This is all running on my local machine using storage emulation.
From Query Timeout and Pagination:
A query against the Table service may return a maximum of 1,000 items
at one time and may execute for a maximum of five seconds. If the
result set contains more than 1,000 items, if the query did not
complete within five seconds, or if the query crosses the partition
boundary, the response includes headers which provide the developer
with continuation tokens to use in order to resume the query at the
next item in the result set. Continuation token headers may be
returned for a Query Tables operation or a Query Entities operation.
Please check if you're getting back Continuation Token in response.
Now coming to your filter expressions:
((PartitionKey eq '10005678') and (RowKey ge 'PL7NR_201503150000')) and (RowKey lt 'PL7NR_201504082359')
This one is definitely doing a Full Table Scan because (RowKey lt 'PL7NR_201504082359') is a clause in itself. To execute this particular piece, it basically starts from the top of the table and finds entities where RowKey < 'PL7NR_201504082359' without taking PartitionKey into consideration.
(PartitionKey eq '10005678') and ((RowKey ge 'PL7NR_201503150000') and (RowKey lt 'PL7NR_201504082359'))
This one is doing a Partition Scan, and you may not get results back if you have too much data in the specified partition or the query takes more than 5 seconds to execute, as mentioned above.
So, check if your query is returning any continuation tokens and make use of them to get the next set of entities if no entities are returned.
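As an illustration, here is a minimal sketch of handling the continuation token explicitly with the segmented API of the same classic .NET table client, reusing table, rangeQuery, StorageEntity and storageEntities from the code in the question; this makes an empty page that still carries a token visible instead of hiding it inside the foreach:

TableContinuationToken token = null;
do
{
    // Each segment is one page of results; an empty page can still carry a continuation token.
    TableQuerySegment<StorageEntity> segment = table.ExecuteQuerySegmented(rangeQuery, token);
    storageEntities.AddRange(segment.Results);
    token = segment.ContinuationToken; // null once the service has no more pages to return
} while (token != null);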
A few resources that you may find useful:
How to get most out of Windows Azure Tables: http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx
Azure Storage Table Design Guide: Designing Scalable and Performant Tables: http://azure.microsoft.com/en-us/documentation/articles/storage-table-design-guide/
