I have a worker role running that creates tables in table storage, and I would like to be able to group these tables into categories, much as you would group files under folders.
I cannot see any way to do this with the table classes in .NET, but when I look in my table storage 'Tables', I see a 'Metrics Table' entry that looks like a 'folder' and expands to show multiple metrics tables below it.
How can I create/add one of these myself programmatically?
Any ideas gratefully received.
I'm afraid this is not possible. Metrics tables are a special case that Visual Studio handles differently in its UI. They are not even returned by the Query Tables REST API operation (you can only address them directly by name), and tools like Azure Storage Explorer do not show them at all.
Back to your question: the best practice is to use a common prefix for tables in the same 'category', e.g. WAD* for all Azure Diagnostics tables, or NLog* for NLog tables (see the sketch below).
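For illustration, here is a minimal C# sketch of that convention using the Azure.Data.Tables SDK; the "Geo" prefix and table names are made up for the example.

```csharp
using System;
using Azure.Data.Tables;

var serviceClient = new TableServiceClient("<connection-string>");

// Create tables that share a common prefix; the prefix acts as the "category".
await serviceClient.CreateTableIfNotExistsAsync("GeoCountries");
await serviceClient.CreateTableIfNotExistsAsync("GeoCities");

// List only the tables in that category with a range filter on the name:
// everything >= "Geo" and < "Gep" starts with the prefix "Geo".
await foreach (TableItem table in serviceClient.QueryAsync(
    filter: "TableName ge 'Geo' and TableName lt 'Gep'"))
{
    Console.WriteLine(table.Name);
}
```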
The simple answer is that you can't. The Table Storage service contains tables, and each table contains entities; there is no deeper hierarchy. The 'Metrics Table' grouping you're seeing is purely a UI feature that combines those tables for display.
While creating custom log search alerts in a Log Analytics workspace, I want to store some data and query it in the alert query. Basically, it is a mapping like ABC -> DEF, GHI -> JKL, and these mappings can be changed manually.
I am looking for a solution such as creating a table or function in the workspace, or reading data from a blob within the query. I do not want to define the table or function inside the alert query itself, just read from it. If there are other solutions, please suggest them too.
Have you tried inserting custom data into Log Analytics via the REST API? This will solve your problem; it's what we do using Runbooks, and it works great.
Log Analytics Data Collector API
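To make that concrete, below is a minimal C# sketch against the HTTP Data Collector API. The workspace ID, shared key, and the MyLookup log type are placeholders, and the mapping JSON mirrors the ABC -> DEF example above.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

class LookupUploader
{
    const string WorkspaceId = "<workspace-id>";
    const string SharedKey = "<workspace-primary-key>"; // base64 key from the portal
    const string LogType = "MyLookup";                  // lands in the MyLookup_CL table

    static async Task Main()
    {
        // The mapping rows to store; query later with: MyLookup_CL | project Source_s, Target_s
        string json = "[{\"Source\":\"ABC\",\"Target\":\"DEF\"},{\"Source\":\"GHI\",\"Target\":\"JKL\"}]";

        string date = DateTime.UtcNow.ToString("r");
        string stringToSign =
            $"POST\n{Encoding.UTF8.GetByteCount(json)}\napplication/json\nx-ms-date:{date}\n/api/logs";

        // Sign the request with the workspace key (HMAC-SHA256 over the canonical string).
        using var hmac = new HMACSHA256(Convert.FromBase64String(SharedKey));
        string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Post,
            $"https://{WorkspaceId}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01");
        request.Headers.TryAddWithoutValidation("Authorization", $"SharedKey {WorkspaceId}:{signature}");
        request.Headers.Add("Log-Type", LogType);
        request.Headers.Add("x-ms-date", date);
        request.Content = new StringContent(json, Encoding.UTF8);
        request.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(response.StatusCode); // expect OK (200)
    }
}
```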
I realize this is an old thread, but for anyone else looking to do this, see:
Implementing Lookups in Azure Sentinel
Azure Sentinel provides four methods to reference, import, and use lookup information. The methods are:
1. The built-in Watchlists feature, which enables uploading CSV files as lookup tables.
2. The externaldata KQL function, which enables referencing an Azure Storage file as a lookup table.
3. Custom tables, imported using a custom connector.
4. A KQL function utilizing the datatable operator, which can be updated interactively or using PowerShell.
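As a small illustration of the fourth method, here is a hedged C# sketch that runs a KQL query with an inline datatable lookup via the Azure.Monitor.Query SDK; the workspace ID and the join against the Heartbeat table are assumptions for the example.

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;

var client = new LogsQueryClient(new DefaultAzureCredential());

// An inline lookup table built with the `datatable` operator, joined against
// a regular table (Heartbeat here is just an example source).
string kql = @"
let Mapping = datatable(Source:string, Target:string)
[
    'ABC', 'DEF',
    'GHI', 'JKL'
];
Heartbeat
| lookup kind=leftouter Mapping on $left.Computer == $right.Source
| take 10";

var response = await client.QueryWorkspaceAsync(
    "<workspace-id>", kql, new QueryTimeRange(TimeSpan.FromHours(1)));

foreach (var row in response.Value.Table.Rows)
    Console.WriteLine(row["Target"]);
```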
I have an Azure SQL database with many tables that I want to update frequently with any change made, be it an update or an insert, using Azure Data Factory v2.
There is a section in the documentation that explains how to do this.
However, the example covers only two tables, and for each table a table TYPE needs to be defined and a stored procedure built.
I don't know how to generalize this for a large number of tables.
Any suggestion would be welcome.
You can follow my answer https://stackoverflow.com/a/69896947/7392069. I don't know how to generalise the creation of table types and stored procedures either, but the metadata table of the metadata-driven copy task at least provides a lot of help in achieving what you need.
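If you do want to script the per-table artifacts yourself, one rough approach (my own assumption, not part of the linked answer) is to generate the DDL from INFORMATION_SCHEMA, as sketched below in C#. The connection string and schema are placeholders, and numeric precision/scale handling is omitted for brevity.

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.Data.SqlClient;

const string connStr = "<azure-sql-connection-string>";
var columnsByTable = new Dictionary<string, List<string>>();

using (var conn = new SqlConnection(connStr))
{
    conn.Open();
    using var cmd = new SqlCommand(@"
        SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE,
               COALESCE(CAST(CHARACTER_MAXIMUM_LENGTH AS varchar(10)), '') AS CHAR_LEN
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = 'dbo'
        ORDER BY TABLE_NAME, ORDINAL_POSITION", conn);
    using var reader = cmd.ExecuteReader();
    while (reader.Read())
    {
        // Build a column definition like [Name] nvarchar(50), using "max" for -1.
        string len = reader.GetString(3);
        string col = $"[{reader.GetString(1)}] {reader.GetString(2)}"
                   + (len == "" ? "" : $"({(len == "-1" ? "max" : len)})");
        if (!columnsByTable.TryGetValue(reader.GetString(0), out var cols))
            columnsByTable[reader.GetString(0)] = cols = new List<string>();
        cols.Add(col);
    }
}

// Emit one table type per table; the matching MERGE stored procedure can be
// generated from the same metadata (keyed on each table's primary key columns).
foreach (var (table, cols) in columnsByTable)
{
    Console.WriteLine($"CREATE TYPE [dbo].[{table}Type] AS TABLE (");
    Console.WriteLine("    " + string.Join(",\n    ", cols));
    Console.WriteLine(");");
}
```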
I'm trying to find the best solution for storing dynamic spatial data. I wonder if any of Microsoft's Azure solutions could work. Azure Table Storage would let me create a lot of custom and dynamic structures stored on fast SSD disks.
Because of the data's dynamic nature, conventional indexing seems useless. I would also like to create many table-like structures, so the whole architecture cannot be static. Using Azure Table Storage, I would dynamically create a table per country, city, etc., sorted by latitude or longitude.
I would appreciate any clue.
Azure Table Storage has mostly been replaced by Azure Cosmos DB.
At the time of writing, the Table Storage page even says:
The content in this article applies to the original basic Azure Table storage. However, there is now a premium offering for Azure Table storage in public preview that offers throughput-optimized tables, global distribution, and automatic secondary indexes. To learn more and try out the new premium experience, please check out Azure Cosmos DB: Table API.
You can use Cosmos DB via the Table API, but you'll probably find the Document DB API to be more powerful.
Documents are "schema-free". You can just throw your documents in to a collection, and then you can query against them.
You can create documents which have geo-spatial properties which are indexed automatically.
Then you can perform geo-spatial queries against those properties.
For example, you might give each of your documents a point, and then create a query to select all documents that are inside of a polygon.
Or maybe you want to find out how far away each document is from a given point.
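As a rough sketch of those queries with the Microsoft.Azure.Cosmos SDK (database, container, the location property, and the coordinates are all assumptions for the example):

```csharp
using System;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Spatial;

using var client = new CosmosClient("<account-endpoint>", "<account-key>");
Container container = client.GetContainer("geo-db", "places");

// Each document carries a GeoJSON point, e.g.
// { "id": "1", "city": "Seattle", "location": { "type": "Point", "coordinates": [-122.33, 47.61] } }
// Find every document within 5 km of a given point (ST_DISTANCE returns meters).
var query = new QueryDefinition(
        "SELECT c.id, c.city FROM c WHERE ST_DISTANCE(c.location, @center) < 5000")
    .WithParameter("@center", new Point(-122.27, 47.60));

FeedIterator<dynamic> feed = container.GetItemQueryIterator<dynamic>(query);
while (feed.HasMoreResults)
{
    foreach (var item in await feed.ReadNextAsync())
        Console.WriteLine($"{item.id}: {item.city}");
}
```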
I have used the Data Factory wizard to copy Azure tables from one storage account to another. The tables are huge, with millions of entities and hundreds of partitions.
Now I want to make sure the copied tables are correct. Is there any way I can compare the integrity of tables between two storage accounts? Does Azure have any feature to do this?
There's no built-in "compare tables" feature; this is something you'll have to build yourself. You'll need to go partition by partition, comparing content. Since Table Storage returns entities ordered by PartitionKey and then RowKey, both tables should stream back in the same order, so it becomes an entity-by-entity comparison.
You might also consider, on the "write" end of the process, verifying that you're doing one-for-one entity copies.
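Here is a bare-bones C# sketch of that entity-by-entity pass with Azure.Data.Tables. The table name and connection strings are placeholders, and it checks keys only, so extend the comparison to property values as your schema requires.

```csharp
using System;
using Azure.Data.Tables;

var source = new TableClient("<source-connection-string>", "MyTable");
var target = new TableClient("<target-connection-string>", "MyTable");

// Table Storage returns entities sorted by PartitionKey then RowKey, so the
// two streams can be walked in lock-step.
await using var src = source.QueryAsync<TableEntity>().GetAsyncEnumerator();
await using var dst = target.QueryAsync<TableEntity>().GetAsyncEnumerator();

long count = 0;
while (true)
{
    bool hasSrc = await src.MoveNextAsync();
    bool hasDst = await dst.MoveNextAsync();
    if (hasSrc != hasDst) { Console.WriteLine($"Entity counts diverge after {count} entities."); break; }
    if (!hasSrc) { Console.WriteLine($"Tables match ({count} entities)."); break; }

    if (src.Current.PartitionKey != dst.Current.PartitionKey ||
        src.Current.RowKey != dst.Current.RowKey)
    {
        Console.WriteLine($"Mismatch at entity {count}: " +
            $"{src.Current.PartitionKey}/{src.Current.RowKey} vs {dst.Current.PartitionKey}/{dst.Current.RowKey}");
        break;
    }
    count++;
}
```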
I have an application that looks up data for a page. The data is looked up by partition key and row key in Table Storage.
I am considering SQL Azure storage. Is there some advantage in moving to this kind of storage, given that the lookup will always be very direct? Note that I do NOT need any reporting. ALL I want is single-row lookup.
Assuming your requirements really are fully stated (you will only ever need single-row access), and assuming you only want to know about advantages and not disadvantages, then the only advantages I can think of are that SQL Azure offers:
- time-based subscription pricing instead of pricing per transaction
- options for backup (in CTP)
- options for replication/synchronisation
- more client library options (e.g. Entity Framework, Linq2SQL, etc)
- more data types supported
- more options for moving your app outside of Azure if you ever want to
Use Table Storage if you don't need relational database functionality.
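For comparison, the Table Storage side of that single-row lookup is a direct point read on partition key and row key. A minimal sketch with Azure.Data.Tables (table name, keys, and the property name are placeholders):

```csharp
using System;
using Azure.Data.Tables;

var table = new TableClient("<connection-string>", "Pages");

// A point read: the cheapest, fastest query Table Storage supports.
TableEntity page = await table.GetEntityAsync<TableEntity>(
    partitionKey: "site1", rowKey: "home");

Console.WriteLine(page["Title"]);
```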