I have a Logic App that should store data in Azure Table Storage. Everything worked fine until I realized that one of my properties, which should be stored as a DateTime, is stored as a String.
The problem is that another application periodically queries the data in the table and expects to find DateTimes there:
var query = new TableQuery<UserEntity>().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterConditionForDate(
            nameof(UserEntity.AccessEndTime),
            QueryComparisons.GreaterThanOrEqual,
            DateTime.SpecifyKind(queriedDate, DateTimeKind.Utc)),
        TableOperators.And,
        TableQuery.GenerateFilterConditionForDate(
            nameof(UserEntity.AccessEndTime),
            QueryComparisons.LessThan,
            DateTime.SpecifyKind(queriedDate.AddDays(1), DateTimeKind.Utc))));
Basically, my C# app is looking for users whose AccessEndTime value falls on a specific day.
Unfortunately, since the Logic App writes the value as a string, my query does not return any data.
Here's a part of my Logic App:
First I create an object with the proper data as JSON, and then I use the Insert or Replace Entity block, which uses the Body of that JSON as the entity to be put in the table. As you can see, AccessEndTime has type: string. I tried using type: datetime, but it just fails with an error (no such type).
I guess I could handle it on the client side, but then my UserEntity would have to have AccessEndTime as a String, and that just doesn't feel right.
What am I missing?
//EDIT
I also found this. I tried to put my data like this:
So I explicitly added the type of my property. Unfortunately, the result is still the same.
Check out the response to this SO question about the same issue: Cannot query a DateTime column in Table Storage
It looks like you could have used formatDateTime() as per the documentation, but this will not work, as described below:
According to some tests, the value is still of type "String" rather than "DateTime". This document shows that the formatDateTime() method returns its value as a string.
So when we insert the value from formatDateTime(), it inserts a string into the storage table. There seems to be a display bug in the Azure portal, which shows the type as "DateTime". But if we open the table in "Azure Storage Explorer" rather than in the Azure portal, we can see that the TimeOfCreation of the newly inserted record is of type "String".
For this requirement, it's difficult to get a "DateTime" value in a Logic App and insert it into Table Storage; we can only insert a string. But we can edit the type after inserting the new record into Table Storage, either in the Azure portal or in "Azure Storage Explorer". In the Azure portal, just click "Edit" on the record and click the "Update" button without changing anything (because the type already shows as "DateTime"). In "Azure Storage Explorer", change the type from "String" to "DateTime" and click "Update". After that, querying the records by "TimeOfCreation" >= Last 365 days succeeds.
The bad thing here is that we can only do this manually for each inserted record. We can't solve this problem in the Logic App or batch-update the type (in the portal or in the explorer). If you want to batch-update the type, you can query all of the newly inserted records with this API (use $filter to filter on the timestamp), then get each record's PartitionKey and RowKey, loop over them, and use this API to update the type of the TimeOfCreation column.
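If scripting that batch update is an option, here is a rough sketch of the same idea using the .NET Table SDK instead of raw REST calls. It assumes the Microsoft.Azure.Cosmos.Table package, a hypothetical "Users" table, and that the stored strings parse as dates; the connection string is a placeholder.

// Sketch: re-type AccessEndTime from String to DateTime for every affected entity.
using System;
using Microsoft.Azure.Cosmos.Table;

var account = CloudStorageAccount.Parse("<connection-string>");            // placeholder
var table = account.CreateCloudTableClient().GetTableReference("Users");   // placeholder table name

foreach (var entity in table.ExecuteQuery(new TableQuery<DynamicTableEntity>()))
{
    if (!entity.Properties.TryGetValue("AccessEndTime", out var prop) ||
        prop.PropertyType != EdmType.String)
        continue; // already a DateTime, or the property is missing

    // Replace the string property with a typed DateTime property and merge the entity back.
    var parsed = DateTime.Parse(prop.StringValue, null,
        System.Globalization.DateTimeStyles.AdjustToUniversal);
    entity.Properties["AccessEndTime"] = new EntityProperty(parsed);
    table.Execute(TableOperation.Merge(entity));
}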
Related
I have an ADF pipeline that iterates over a set of files and performs various operations, and I have an Azure Cosmos DB (SQL API) instance where I would like to insert the name of each file and a timestamp, mainly to keep track of which files have already been processed and which have not, but in the future I might want to add some other bits of data related to each file.
This is what I have in my Cosmos DB:
Currently I am trying to use the Copy Data activity for the insert part.
One problem is that this particular activity expects a source, while at this point I only have the filename. In theory, an option was to use the Blob Storage from which I read the file at the beginning, but since the Blob Storage dataset is set to store binary files, I got the following error when I tried to use it as the source:
Because of that, I created a dummy Cosmos DB linked service, but I have several issues with this approach:
Generally, the idea of a dummy source is not very appealing to me.
I haven't found much information on the topic, but it seems that if I want to use something in the Sink, I need to SELECT it from the source.
Even though I have selected a value for the id, the item is not saved with the value selected in the source query; as you can see from the first screenshot, I got a GUID, and only the name is as I want it.
So I have two questions. I am just learning ADF, and this approach doesn't look like the proper way to insert an item into Cosmos DB from an activity, so a better/more common approach would be appreciated. If there is no better proposal, how can I at least apply my own value for the id column? If I create the item in the Cosmos DB GUI and save it from there, as you can see, I am able to use the filename as the id, which for now seems like a good idea to me, but I wasn't able to set a custom value (string or int) through the activity, so how can I achieve this?
This is what my Sink looks like:
I have created Azure Search resource, and also SQL Database.
I'm trying to use the "Add Azure Search" option in the Azure portal.
It is split into two steps:
Data source creation (done)
Indexer creation
When I try to create the indexer, it says:
Import configuration failed, error creating Index
Error creating Index: "The request is invalid."
What does it mean? There are no details.
My Table Schema looks like this:
Did you change any of the types in the index from the defaults? Here is a mapping of what SQL types map to Azure Cognitive Search index field types: https://learn.microsoft.com/en-us/azure/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers#mapping-between-sql-and-azure-cognitive-search-data-types From my link, nvarchar maps to Edm.String or Collection(Edm.String). In your screenshot above, it looks like you've changed several field types (to Edm.DateTimeOffset and Edm.Int64, for example). That may be causing the error when it tries to create the index.
Or, it may be that you specified a ‘suggester name’ and ‘search mode’, but none of the index fields have ‘Suggester’ checked (hard to tell if the screenshot includes all fields or not). If you need a suggester, you should mark at least one field to use it. If you don’t need it, don't fill in those fields; otherwise the index creation will fail.
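For reference, here is a minimal sketch of an equivalent index definition for the Create Index REST API, with hypothetical field names; note that a suggester must list at least one entry in sourceFields, and nvarchar columns should stay Edm.String unless the underlying SQL type really is a date or number.

{
  "name": "users-index",
  "fields": [
    { "name": "Id", "type": "Edm.String", "key": true },
    { "name": "Name", "type": "Edm.String", "searchable": true },
    { "name": "CreatedAt", "type": "Edm.DateTimeOffset", "filterable": true, "sortable": true }
  ],
  "suggesters": [
    { "name": "sg", "searchMode": "analyzingInfixMatching", "sourceFields": [ "Name" ] }
  ]
}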
I'm trying to add entries into my Cosmos DB using Azure Data Factory. However, I am not able to choose the right collection, as Azure Data Factory can only see the top level of the database.
Is there any special syntax for choosing which collection to pick from the Cosmos DB SQL API? I've tried entities[0] and entities['tasks'], but neither seems to work.
The new entries are inserted as we can see in the red box; how do I get the entries into the entities collection?
Update:
Original Answer:
If the requirement you mentioned in the comments is what you need, then it is possible. For example, to put JSON data into an existing 'tasks' item, you only need to use the upsert method and make sure the source JSON data has the same id as the 'tasks' item.
This is the official doc:
https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db#azure-cosmos-db-sql-api-as-sink
The random letters and numbers in your red box appear because you did not specify the document id.
Have a look of this:
By the way, if the collection has a partition key, then you also need to specify it.
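As a rough illustration (the values below are placeholders, not taken from your pipeline), the copy activity sink would look something like this:

"sink": {
    "type": "CosmosDbSqlApiSink",
    "writeBehavior": "upsert"
}

With upsert, a source row that carries an id column equal to "tasks" overwrites the existing 'tasks' document instead of creating a new one with a generated GUID id.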
I'm using an Azure Log Analytics workspace for creating Azure Monitor workbooks.
Here is one parameter I have to create, which should present all JSON keys from LogEntry.
e.g. screenshot
LogEntry has details of #metadata, #timestamp and other keys.
Which operator should I use to get all JSON keys under LogEntry?
I tried the steps below, but no such sub-module is shown.
ContainerLog | project LogEntry | evaluate bag_unpack(LogEntry)
I was referring to this link, but this parameter is not working in Log Analytics.
Although not mentioned in the blog you referenced, the bag_unpack plugin can be used in Azure Application Insights Workbooks as well.
However, I'm afraid it won't work on the table in question, the reason being, bag_unpack(column) expects the column argument to be a reference to a dynamic column, whereas the LogEntry column in ContainerLog table is of type string.
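A possible workaround, assuming LogEntry actually contains valid JSON, is to convert the string column to dynamic first and then unpack it:

ContainerLog
| extend Entry = todynamic(LogEntry)   // parse the string into a dynamic property bag
| evaluate bag_unpack(Entry)           // expand each top-level key into its own column

Keep in mind that the resulting schema depends on the data, which can make it awkward to bind to a workbook parameter.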
References:
https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/bag-unpackplugin
https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/containerlog
I have a query like:
SELECT
*
INTO [documentdb]
FROM
[iothub]
TIMESTAMP BY eventenqueuedutctime
I need to use * because the data is dynamic and doesn't have a specific schema. The problem is that the IoT Hub system information data is written to DocumentDB by this query. Is there any way to exclude the IoT Hub system information data?
Thanks.
This is not possible currently, but it will be possible with Job Compatibility Level 1.2 in the near future. For now, one workaround is to create a post-trigger in Cosmos DB to remove this property from the document.
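For illustration, here is a rough sketch of such a post-trigger in Cosmos DB's server-side JavaScript; the property name IoTHub is assumed, and the trigger rewrites the just-created document without it.

// Post-trigger sketch: strip the IoTHub property from the document that was just written.
function removeIoTHubInfo() {
    var context = getContext();
    var collection = context.getCollection();
    var doc = context.getResponse().getBody();   // the document created by this operation

    if (!doc.IoTHub) return;                     // nothing to remove
    delete doc.IoTHub;

    // Replace the stored document with the cleaned-up copy.
    var accepted = collection.replaceDocument(doc._self, doc, function (err) {
        if (err) throw err;
    });
    if (!accepted) throw new Error("replaceDocument was not accepted");
}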
To answer your question, the Azure Stream Analytics service doesn't have built-in support for excluding columns from dynamic data (the IoT Hub information). But we can achieve this by using a UDF. Here is more info on UDFs.
A UDF can help us delete the column from the input data and return the updated JSON.
There are basically two steps to achieve this:
Create a JavaScript UDF.
Go to Functions in the left-hand navigation (below Inputs).
Click Add --> JavaScript UDF.
Give it the function alias removeiothubinfo.
Keep the output type as any.
Copy and paste the following code into the function definition.
function main(input) {
    delete input['IoTHub'];   // remove the IoT Hub system information from the event
    return input;
}
Click on Save
Update the query.
Go to query mode and copy and paste the following query:
WITH NewInput AS
(
    SELECT
        udf.removeiothubinfo(iothub) AS UpdatedJson
    FROM
        [iothub]
)
SELECT
    UpdatedJson.*
INTO
    [documentdb]
FROM
    NewInput
Click on Save
I suggest testing your query before running the job by uploading a sample file containing a similar JSON structure.
Edited
Also, even in Job Compatibility Level 1.2 there is no additional functionality to achieve this. Check this out for more info.
As @chetangm said in his answer, no such filtering mechanism is supported in ASA so far. Yes, you could use a trigger in Cosmos DB; however, it needs to be invoked from SDK code or the REST API. It won't be triggered automatically.
Let me offer another workaround using an Azure Function with a Cosmos DB trigger. It is executed when data is added to or changed in Azure Cosmos DB. You just need to remove the fields you don't want in the function code.
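A rough sketch of what that function could look like (C#, Functions v3-style bindings; the database, collection, and connection-setting names are placeholders):

// Sketch: Cosmos DB-triggered function that removes the IoTHub metadata from new documents.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.WebJobs;
using Newtonsoft.Json.Linq;

public static class RemoveIoTHubInfo
{
    [FunctionName("RemoveIoTHubInfo")]
    public static async Task Run(
        [CosmosDBTrigger("telemetry", "documentdb",                  // placeholder database/collection
            ConnectionStringSetting = "CosmosDbConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
        [CosmosDB("telemetry", "documentdb",
            ConnectionStringSetting = "CosmosDbConnection")] DocumentClient client)
    {
        foreach (var doc in changes)
        {
            var json = JObject.Parse(doc.ToString());
            if (!json.Remove("IoTHub"))              // drop the system info, if present
                continue;                            // nothing removed, which also stops re-trigger loops

            // For a partitioned collection, pass the partition key via RequestOptions as well.
            var uri = UriFactory.CreateDocumentUri("telemetry", "documentdb", doc.Id);
            await client.ReplaceDocumentAsync(uri, json);   // write the cleaned document back
        }
    }
}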