I would like to look at the most recently collected data.
I've written some query statements, but the results come back in the order the data was first collected (oldest first).
I don't know how to write a query that returns the data in reverse chronological order, with the most recently collected first.
Please let me know if you have any related tips.
You can click the Timestamp column header to sort the data in ascending or descending order, since ORDER BY queries are not supported in Azure Storage Explorer.
I don't know how to write a query that returns the data in reverse chronological order, with the most recently collected first.
Based on the official documentation, ORDER BY queries are not currently supported by the Azure Table service. Query results are returned ordered by PartitionKey and then RowKey by default.
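A common workaround, if you control how the entities are written, is to store an inverted timestamp in the RowKey so that the default PartitionKey/RowKey ordering returns the newest entities first. A minimal C# sketch using the Azure.Data.Tables SDK (the table name, partition key, and property names are illustrative):

using System;
using System.Threading.Tasks;
using Azure.Data.Tables;

class NewestFirstWriter
{
    static async Task Main()
    {
        // Illustrative names: a "readings" table and a "sensor1" partition.
        var table = new TableClient("<storage-connection-string>", "readings");
        await table.CreateIfNotExistsAsync();

        // DateTime.MaxValue.Ticks - now.Ticks shrinks as time advances, so a
        // zero-padded, fixed-width value makes newer rows sort first under
        // the default PartitionKey/RowKey ordering.
        string rowKey = (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks).ToString("D19");

        var entity = new TableEntity("sensor1", rowKey)
        {
            { "Temp", 21.0 }
        };
        await table.AddEntityAsync(entity);
    }
}

With this scheme, plain queries (and Storage Explorer) list the most recently collected entities first without needing any ORDER BY support.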
I am trying to get aggregate data sent to different table storage outputs based on a column name in the select query. I am not sure if this is possible with Stream Analytics.
I've looked through the Stream Analytics docs and various forums, but so far haven't found any leads. I am looking for something like:
Select tableName,count(distinct records)
into tableName
from inputStream
I hope this makes it clear what I'm trying to achieve: I want to insert aggregate data into table storage (defined as outputs), taking the output/table storage name from the SELECT query. Any idea how that could be done?
I am trying to get aggregate data sent to different table storage
outputs based on a column name in the select query.
If I don't misunderstand your requirement, you want a CASE...WHEN or IF...ELSE structure in the ASA SQL so that you can send data to different table outputs based on some condition. If so, I'm afraid this cannot be implemented at present: every destination in ASA has to be a specific, pre-defined output, and dynamic outputs are not supported.
However, as a workaround, you could use an Azure Function as the output. You could pass the columns into the Azure Function, then switch in code to save the data into different table storage destinations. For more details, please refer to this official doc: https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-with-azure-functions
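A rough sketch of that idea (not a drop-in implementation): assume an HTTP-triggered function that receives the ASA batch as a JSON array, and that each event carries the tableName and count columns from your query. All names below are assumptions about your schema.

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Azure.Data.Tables;
using Newtonsoft.Json.Linq;

public static class RouteToTables
{
    [FunctionName("RouteToTables")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        // ASA's Azure Functions output posts a JSON array of events.
        string body = await new StreamReader(req.Body).ReadToEndAsync();
        JArray events = JArray.Parse(body);

        var service = new TableServiceClient("<storage-connection-string>");

        foreach (JToken e in events)
        {
            // Route on the "tableName" column from the ASA query
            // (an assumption about the event schema).
            string tableName = (string)e["tableName"];
            TableClient table = service.GetTableClient(tableName);
            await table.CreateIfNotExistsAsync();

            var entity = new TableEntity(tableName, Guid.NewGuid().ToString())
            {
                { "Count", (long)e["count"] }
            };
            await table.AddEntityAsync(entity);
        }

        // Return 200 so ASA treats the batch as delivered.
        return new OkResult();
    }
}

Note that ASA retries batches it considers failed, so it is worth making the writes tolerant of duplicate deliveries.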
Is there a way to bind a SharePoint list to Azure SQL DB tables to fetch updates dynamically? Example: I have a SharePoint list with two columns and an Azure SQL DB table with two columns. I would like to bind them together so that when an update happens in a DB column, the corresponding SharePoint list column is updated.
I have tried writing a Spring Boot job to do this, but it is a lot of code to maintain, and we need to manage the real-time sync on our own.
I am expecting there might be some out-of-the-box connector in Microsoft Flow, Azure Logic Apps, or some other automation that will help me automate this.
I would suggest you check BCS (Business Connectivity Services), so your DB data can sync with a SharePoint external list.
https://learn.microsoft.com/en-us/sharepoint/make-external-list
Another thread with a demo:
https://www.c-sharpcorner.com/article/integrate-azure-sql-db-with-sharepoint-online-as-an-external-list-using-business/
There is a SQL Server connector, which I suppose is what you want. You could use the trigger When an item is created or When an item is modified to get the SQL update details.
For more information, refer to the SQL Server connector doc. Note that there are some known limitations when invoking triggers:
A ROWVERSION column is required for OnUpdatedItems
An IDENTITY column is required for OnNewItems
After the trigger fires, you could use the table details to update the SharePoint list.
Hope this helps.
I have recently started using Azure Cosmos DB in our project. For reporting purposes, we need to get all the partition key values in the collection. I could not find any suitable API to achieve this.
UPDATE: According to Brian in the comments below, DISTINCT is now supported. Try something like:
SELECT DISTINCT c.partitionKey FROM c
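A quick sketch of running that query with the .NET SDK (Microsoft.Azure.Cosmos); the partitionKey property name and the connection details are placeholders to adapt to your schema:

using System;
using Microsoft.Azure.Cosmos;

// Sketch: enumerate the distinct partition key values in a container.
var client = new CosmosClient("<account-endpoint>", "<account-key>");
Container container = client.GetContainer("<database>", "<collection>");

var query = new QueryDefinition("SELECT DISTINCT c.partitionKey FROM c");
using FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);

while (iterator.HasMoreResults)
{
    foreach (var item in await iterator.ReadNextAsync())
    {
        Console.WriteLine(item.partitionKey); // one distinct key value per row
    }
}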
Prior answer: an idea that could work, except for one thing...
The only way to get the actual partition key values is to do a unique aggregate on that field.
You can directly hit the REST endpoint at https://{your endpoint domain}.documents.azure.com/dbs/{your collection's uri fragment}/pkranges to pull back the minInclusive and maxExclusive ranges for each partition, but those are hash-space ranges; I don't know how to convert them into partition key values, nor how to do a fan-out using the actual minInclusive hash.
Also, there is a slim possibility that the pkranges can change between the time you retrieve them and the time you go to do something with them.
I am trying to integrate Azure Stream Analytics with DocumentDB and use it as an output sink. The problem is that no documents are created in DocumentDB while the processing job is running. I have tested my query and even tried mirroring the output to a storage account. A JSON file containing all the values is created in storage, but DocumentDB stays empty.
Here is my query:
WITH Res1 AS (
    SELECT
        id,
        concat(
            cast(datepart(yyyy, timestamp) as nvarchar(max)), '-',
            cast(datepart(mm, timestamp) as nvarchar(max)), '-',
            cast(datepart(dd, timestamp) as nvarchar(max))) AS date,
        temp, humidity, distance, timestamp
    FROM iothub TIMESTAMP BY timestamp
)

SELECT * INTO docdboutput FROM Res1

SELECT * INTO test FROM Res1
I set the DocumentDB output correctly to an existing collection. I also tried both providing and not providing the document id parameter, and neither option worked. I used the date field as the partition key when creating the DocumentDB database and collection.
I also tried a manual document upload: I copied one line from the JSON file created in the storage account, put it in a separate JSON file, and uploaded it manually to the DocumentDB collection via the portal. That succeeded. Here is an example of one line that was output to the storage file:
{"id":"8ace6228-a2e1-434d-a5f3-c2c2f15da309","date":"2017-2-10","temp":21.0,"humidity":20.0,"distance":0,"timestamp":"2017-02-10T20:47:54.3716407Z"}
Can anyone please advise whether there is a problem with my query, or suggest how I can investigate and diagnose this further?
Are you by any chance using a collection that has <=10K RUs and a partition key defined in DocumentDB (aka a single-partition collection)?
There is an ongoing defect blocking output to single-partitioned collections. It should be fixed by the end of next week. Your workaround at this point is to try a different collection:
a) with >10K RUs (with partition key defined in DocDB)
b) with <=10K RUs (with no partition key defined in DocDB/ASA)
Hope that helps!
I came across some weird behavior with an Azure Table Storage query. I used the following code to get a list of entities from Azure Table Storage:
query = context.CreateQuery<DomainData.Employee>(DomainData.Employee.TABLE_NAME).Where(strPredicate).Select(selectQuery);
where context is a TableServiceContext and I am pulling Employee entities from Azure Table Storage. My requirement is to construct the predicate and projection dynamically:
strPredicate is a string containing the dynamically constructed predicate.
selectQuery is the projection string, constructed dynamically from the properties the user selected.
When the user selects all the properties of the Employee object (which has over 200 properties), the system builds the dynamic projection string over all of them, and the query takes 45 minutes to retrieve 60,000 records from Azure Table Storage.
Whereas when I query the entity directly without any projection, i.e.:
query = context.CreateQuery<DomainData.Employee>(DomainData.Employee.TABLE_NAME).Where(strPredicate);
the query takes only 5 minutes to retrieve the same 60,000 records. Why this peculiar behavior? The two queries are identical except that one projects columns/properties and the other does not, yet Azure Table Storage returns the same number of entities, with the same properties and the same size per entity. Why does the projected query take so much longer? Please let me know.
The standard advice when dealing with perceived anomalies with Windows Azure Storage is to use Fiddler to identify the actual storage operation invoked. This will quickly allow you to see what the actual differences are with the two operations.
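If the storage client does not pick up Fiddler's proxy on its own, one way to force it (a sketch; Fiddler's default port 8888 is assumed) is to set the default proxy before creating the context:

using System.Net;

// Route outbound requests through Fiddler's local proxy (default 127.0.0.1:8888)
// so each table request URI and response can be inspected.
WebRequest.DefaultWebProxy = new WebProxy("127.0.0.1", 8888);

In the captured traffic, the projected query will carry the full $select list of 200+ property names in every request URI, which makes the difference between the two operations easy to compare.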