I have a Cosmos DB, a queue, a function, and a MySQL DB. When the Cosmos DB gets an entry, it writes into the queue. When a message lands in the queue, it automatically triggers a function which writes into the MySQL DB. While fetching the resources from Azure using Resource Manager, I got all the resources under the subscription, but could not find the relationships between them. Can I get the relationship between these resources?
This GitHub issue talks about exactly what you are looking for. Please have a look at the conversation. In short, there was a feature called resource links which could be used to mark a relationship between resources and then query a resource for its linked resources; however, currently you can discover related resources directly through a resource's properties or by tagging resources to note the connections.
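If you go the tagging route, a minimal sketch with the Python management SDK could look like the following. The azure-identity and azure-mgmt-resource packages, the tag name "workflow", and the value "order-pipeline" are assumptions for illustration only.

```python
# Sketch: discover related resources by a shared marker tag.
# Assumes azure-identity and azure-mgmt-resource are installed; the tag
# name/value are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# List every resource carrying the marker tag, i.e. the pieces of the same
# logical pipeline (Cosmos DB, queue, function, MySQL DB).
related = client.resources.list(
    filter="tagName eq 'workflow' and tagValue eq 'order-pipeline'"
)
for resource in related:
    print(resource.name, resource.type)
```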
Hope this helps!! Cheers!!
I wonder if it's a good idea to use an Azure Cosmos DB "container" to manage an entity's status? For example, an employee's reimbursement can have different statuses like submitted, approved, paid, etc. Do you see any problem with creating a separate container for "Submitted", "Approved", etc.? They would contain a similar reimbursement object but slightly different data points due to their status. For example, the Submitted container could have the manager's name as the approver, and the Paid container could have the payment method.
In other words, it's like a persistent queue: an item gets moved out of one container and into the next as it progresses through the workflow.
Are there any concerns with this approach? Does the Azure "provisioned throughput" pricing model charge by the container, meaning the more containers you have, the more expensive it gets? Or is it at the database level, so that I can have as many containers as I want and it only charges for the queries?
Sorry for the newbie questions, learning about Cosmos. Thanks for any advice!
It's a two-part question :).
The first part (single container vs. multiple containers) is basically an "opinion-based" question. I would have opted for a single-container approach, as it would give me just one place to look for the current status of an item. But that's just my opinion :).
Regarding your question about pricing model, Azure Cosmos DB offers you both.
You can provision throughput at the container level as well as on the database level. When you provision throughput at the database level, all containers (up to 25) in that database will share the throughput of the database.
You can even mix and match the approaches, i.e. you can have throughput provisioned at the database level and then have some containers share the throughput of the database while other containers have dedicated throughput.
Please note that once the throughput type (fixed, shared, or auto-scale) is configured at the database/container level, it can't be changed. You will have to delete the resource and create a new one with the changed throughput type.
You can learn more about throughput here: https://learn.microsoft.com/en-us/azure/cosmos-db/set-throughput.
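As an illustration of the two models, here is a minimal sketch with the azure-cosmos Python SDK; the account URI/key, container names, and RU/s values are placeholders.

```python
# Sketch: shared (database-level) vs. dedicated (container-level) throughput.
# Assumes the azure-cosmos package; all names and RU/s values are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-uri>", credential="<account-key>")

# Shared throughput: every container in this database draws from the 400 RU/s pool.
db = client.create_database_if_not_exists("reimbursements", offer_throughput=400)
db.create_container_if_not_exists("submitted", partition_key=PartitionKey(path="/employeeId"))
db.create_container_if_not_exists("approved", partition_key=PartitionKey(path="/employeeId"))

# Dedicated throughput: this container gets its own 400 RU/s on top of the shared pool.
db.create_container_if_not_exists(
    "paid", partition_key=PartitionKey(path="/employeeId"), offer_throughput=400
)
```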
I want to have control in Azure over newly created and deleted resources.
I need a query to know "who" created or deleted a resource in Azure and "when".
Is this possible? How can I do this query?
Whenever a resource is created or deleted, information about that operation is stored in Azure Activity Logs. You should be able to find the information by querying that.
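For example, a minimal sketch with the azure-mgmt-monitor Python SDK (the subscription ID and the seven-day window are placeholders):

```python
# Sketch: list "who"/"when" for create (write) and delete operations from the
# Activity Log. Assumes azure-identity and azure-mgmt-monitor are installed.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

start = (datetime.now(timezone.utc) - timedelta(days=7)).isoformat()
events = client.activity_logs.list(
    filter=f"eventTimestamp ge '{start}'",
    select="eventTimestamp,caller,operationName,resourceId,status",
)

for e in events:
    # operationName says whether it was a write (create/update) or a delete,
    # caller is the "who", eventTimestamp is the "when".
    op = (e.operation_name.value or "").lower()
    if "write" in op or "delete" in op:
        print(e.event_timestamp, e.caller, e.operation_name.value, e.resource_id)
```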
Another alternative would be to make use of Azure Event Grid and subscribe to subscription-level events. You can subscribe to the Microsoft.Resources.ResourceWriteSuccess (for creation/update of resources) and Microsoft.Resources.ResourceDeleteSuccess (for resource deletion) events and take action on them in near real time.
Within the Azure Portal, you can view these types of events from the past 90 days in the Activity Log blade.
For access to events occurring more than 90 days in the past, you need to pre-emptively set up log archival as detailed in the Export the Azure Activity Log article.
If you are planning to use the Activity Log export feature, please make sure you use the new diagnostic settings feature on the Azure subscription to export Activity Logs. It offers multiple improvements over the older options such as log profiles or the Activity Log solution (Log Analytics).
https://learn.microsoft.com/en-us/azure/azure-monitor/platform/activity-log-collect
https://learn.microsoft.com/en-us/azure/azure-monitor/platform/diagnostic-settings-template
I am working on a script to collect all resources and put them into one resource group; however, the PowerShell command used for moving resources works on a resource-group-by-resource-group basis. This means that if there are dependent/related resources in different resource groups, the command fails.
The alternative, then, is to group resources by their relation to one another and move them using some other method (probably manually through the portal or the REST API).
How can one then go through a list of resources in a subscription and group them by their dependency/relations?
IMHO, one of the reasons resource groups were introduced is to avoid exactly this kind of situation: as per this Azure document, a resource group is a container that holds related resources for an Azure solution.
I believe a straightforward way or feature to get the list of dependent/related resources sitting in different resource groups in a subscription and group them by their dependencies/relations is currently not available or not supported. But I see this feature request raised in the UserVoice/feedback forum as a related one, so if you are interested, you may upvote it or create new feedback there. In general, the responsible Azure product/feature team triages and checks the feasibility and priority of received feedback based on various factors such as the number of votes it receives, its feasibility, open prioritized backlog items, etc.
On the other hand, as a workaround for now, I suggest building an automation that leverages Azure platform logs (fetching those logs for the particular time window of the operation) to work out which resources in different resource groups are dependent/related and to group them accordingly, as those logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on.
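A rough sketch of that idea, using the Activity Log portion of the platform logs, follows. It groups resources by the correlation ID of their write operations (resources touched by the same deployment share a correlation ID), which is a heuristic rather than a true dependency graph; the packages and the 30-day window are assumptions.

```python
# Sketch: bucket resources by the correlation ID of their Activity Log write events.
# Assumes azure-identity and azure-mgmt-monitor; this is a heuristic, not a full
# dependency analysis.
from collections import defaultdict
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

start = (datetime.now(timezone.utc) - timedelta(days=30)).isoformat()
groups = defaultdict(set)

for e in client.activity_logs.list(filter=f"eventTimestamp ge '{start}'"):
    if e.resource_id and "write" in (e.operation_name.value or "").lower():
        groups[e.correlation_id].add(e.resource_id)

# Each bucket now holds resources that were touched by the same operation,
# even if they live in different resource groups.
for correlation_id, resource_ids in groups.items():
    if len(resource_ids) > 1:
        print(correlation_id, sorted(resource_ids))
```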
Other related references:
Move resources to a new resource group or subscription
Move operation support for resources
Troubleshoot moving Azure resources to new resource group or subscription
I'm fairly new to OCI.
What is a concept in Oracle Cloud Infrastructure that is similar or close to Microsoft Azure "Resource Groups"?
In Azure, when a resource group is deleted, all resources in that group are deleted along with it. "Compartments" in Oracle Cloud Infrastructure are not exactly the same concept, because in order to delete a compartment, each resource must be deleted first, and only then can the compartment be deleted. Is it possible to delete a compartment along with its resources without deleting the resources one by one?
I see two options here:
You may use instance pools. Deleting an instance pool will delete all of its resources, but that only includes instances, boot volumes, and block volumes; networking and other resources wouldn't be impacted. Also, an instance pool only works for compute instances that share the same configuration, so this is not a generic solution to your question.
Resource Manager can be used to create a stack with all the resources you need. When you destroy a stack by launching a destroy job, it deletes all the resources that are part of the stack. However, Resource Manager requires you to create Terraform config files, which can be applied through the OCI Console. This also means that you cannot create any components of the stack manually through the GUI; you have to keep using Resource Manager whenever you want to update any resources of the stack.
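If you go with the Resource Manager option, a hedged sketch with the OCI Python SDK might look like the following; the stack OCID is a placeholder, and the model/parameter names should be verified against the SDK version you use.

```python
# Sketch: launch a destroy job for an existing Resource Manager stack.
# Assumes the "oci" package and a configured ~/.oci/config profile; the stack
# OCID is a placeholder.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
rm_client = oci.resource_manager.ResourceManagerClient(config)

destroy_job = rm_client.create_job(
    oci.resource_manager.models.CreateJobDetails(
        stack_id="ocid1.ormstack.oc1..<placeholder>",
        display_name="tear-down-stack",
        job_operation_details=oci.resource_manager.models.CreateDestroyJobOperationDetails(
            execution_plan_strategy="AUTO_APPROVED"
        ),
    )
)
# The job deletes every resource that is part of the stack.
print(destroy_job.data.lifecycle_state)
```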
Compartments in Oracle Cloud Infrastructure are similar to Azure resource groups.
As per the docs, it is not possible to delete all the resources in a compartment in one go the way it is in Azure.
I know the Microsoft Azure API has a way to pull a data slice using a GET request. The API is here:
https://management.azure.com/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.DataFactory/datafactories/<DataFactoryName>/tables/<TableName>/sliceruns?start=<StartDateTime>&api-version=<Api-Version>
The problem is that I have to manually specify the data factory, dataset, and start time; what if I want to pull all logs for a start time for a particular resource group? I know I can do it if I list all data factories and datasets and then loop through them, but then I'm making an HTTP request inside a nested for loop, which seems like a really bad/expensive idea. I'm working on a logging web app using Kibana, which is why I need all the logs.
Unfortunately this is not supported. This is due to the way API routes are designed for Azure Resource Manager (ARM) services, of which ADF is one. The solution you mentioned, while not ideal, is the best available one.
A bit more: API routes for top-level resources (e.g. a data factory) will always contain a subscription ID and resource group name. Similarly, routes for child resources/APIs (e.g. datasets, slices, etc.) must contain the subscription ID, resource group name and the top-level resource name.
If there were such an API that let you list slices from all data factories in a resource group, it would have to be executed as a "fan-out" query. Since the data factories in the resource group can be in any region, ARM would have to send requests to each of the Data Factory resource providers (RPs) that has a data factory in the resource group, then aggregate and return the results to the caller; this kind of query is not supported by any of the Azure services.
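If you do end up looping, a rough sketch against the same routes from the question could look like this; the API version, bearer token, and start time are placeholders/assumptions.

```python
# Sketch: enumerate data factories and datasets in a resource group, then pull
# slice runs for each, using the classic (v1) ADF REST routes from the question.
# The API version, access token, and start time are placeholders.
import requests

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
API_VERSION = "2015-10-01"
START = "2016-01-01T00:00:00Z"
headers = {"Authorization": "Bearer <arm-access-token>"}
base = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourcegroups/{RESOURCE_GROUP}/providers/Microsoft.DataFactory"
)

# One request per factory plus one per dataset: the fan-out described above.
factories = requests.get(f"{base}/datafactories?api-version={API_VERSION}", headers=headers).json()
for factory in factories.get("value", []):
    tables_url = f"{base}/datafactories/{factory['name']}/tables?api-version={API_VERSION}"
    for table in requests.get(tables_url, headers=headers).json().get("value", []):
        runs_url = (
            f"{base}/datafactories/{factory['name']}/tables/{table['name']}"
            f"/sliceruns?start={START}&api-version={API_VERSION}"
        )
        print(requests.get(runs_url, headers=headers).json())
```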