I've been working on migrating all of the work items from one Azure DevOps (Services) project to another project in the same Organization.
I used the nkdAgility azure-devops-migration-tools to successfully copy the majority of existing work items across, but it did not grab our Shared Queries.
I played around with the Azure REST API in PowerShell to list the queries. I also looked at the az CLI suite to see if there was a way to list them. I was able to find a couple at the root level, but not the entire list of Shared Queries.
Is it possible to accomplish this through either of the above methods?
My Google-fu was strong today! Here's a link to a script that does almost exactly what I want.
Migrate Azure DevOps work items queries to a new organization
The only difference is that I am staying within my Organization, so I am modifying the script accordingly. Also, the Azure REST API has probably evolved a bit since the original script was written, so I am updating the requests to handle that.
Thanks Josh Kewley!
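For reference, here is a minimal sketch of walking the full Shared Queries tree with the current REST API. The organization, project, and PAT below are placeholders; the key detail is that $depth maxes out at 2, so deeper folders must be re-queried:

```powershell
# Assumes a PAT with work item (read) scope; $org and $project are placeholders.
$org     = "my-org"
$project = "My.Project"
$pat     = "<personal-access-token>"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

function Get-QueryTree($path) {
    # $depth maxes out at 2, which is why a root-level listing looks incomplete
    $url = "https://dev.azure.com/$org/$project/_apis/wit/queries/$path" +
           "?`$depth=2&`$expand=all&api-version=7.1"
    Invoke-RestMethod -Uri $url -Headers $headers
}

function Walk($node) {
    if ($node.isFolder) {
        if ($node.hasChildren -and -not $node.children) {
            # Folder deeper than the returned depth: fetch its subtree explicitly
            $node = Get-QueryTree $node.id
        }
        foreach ($child in $node.children) { Walk $child }
    } else {
        # A leaf query; with $expand=all the WIQL is on the node itself
        [pscustomobject]@{ Path = $node.path; Wiql = $node.wiql }
    }
}

Walk (Get-QueryTree "Shared Queries")
```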
I'm currently busy with an internship, in which I need to create a program that automatically creates "snapshots" of the current state of Azure resources (and sometimes their dependencies) that need to be deployed to another environment, e.g. Acceptance -> Production. These snapshots must then be deployed to the new environment at a later date that has been coordinated with the client.
A solution can consist of >100 Azure resources, ranging from API Management instances to Logic Apps, Cosmos DBs, etc. When a customer accepts or says "OK" to a few resources (i.e., a part of the total solution), a snapshot needs to be made of each resource in the specific state it was in when the client said OK. That means I also have to snapshot the dependencies of that specific resource (a Logic App can depend on a Cosmos DB, Key Vault, etc.).
And I can't just take a reference to the resource in the Acceptance environment; I need to bring that dependency over to Production as well, since another developer might continue working on said dependency, which could break things.
I am a bit at a loss as to which direction to take here. I don't have a lot of experience with ARM (templates), and I have been making prototypes for a month now.
I first tried to generate my own ARM (and Bicep) files by gathering information from the Azure REST API, but I soon discovered this is not viable, because I cannot extract all of the information needed to create such an ARM file from that API.
I then looked into modifying the ARM templates that Azure itself generates. While this is an option, they contain a lot of information that I do not need or want to transfer to another environment. It is also very hard to determine which parts of a generated ARM file must be deleted, updated, copied, or left alone. And then I would still need to recursively get the ARM templates of the dependencies and go through those in an automated way as well.
Is modifying existing ARM templates the best route to go here? Or does a similar product already exist which might help achieve my goal?
Thank you!!
In this case, I would not modify exported ARM templates; I would take an Infrastructure as Code approach instead. That is, I would create ARM templates as granular as possible, ideally one template per resource, store that infrastructure code in a source repository, and version it if required so it can be used across environments. The reason for recommending one template per resource is to handle the dependencies in a complex environment. I know this might look like a big activity for a first-time implementation, but once the templates are integrated into a continuous integration and continuous deployment (CI/CD) tool like Azure DevOps, all of it can be automated with release pipelines for fast and reliable application and infrastructure updates. For more information, please refer to this and this Azure document.
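As a minimal sketch of what the one-template-per-resource deployment could look like (assuming the Az.Resources module; the resource group, template file names, and dependency order are placeholders):

```powershell
$rg = "rg-production"   # placeholder target environment

# Dependencies first (Key Vault, Cosmos DB), then the resources that use them
$templates = @("keyvault.json", "cosmosdb.json", "logicapp.json")

foreach ($t in $templates) {
    New-AzResourceGroupDeployment `
        -ResourceGroupName $rg `
        -TemplateFile $t `
        -TemplateParameterFile ($t -replace '\.json$', '.parameters.json') `
        -Mode Incremental   # only touches resources described in the template
}
```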
I have the following requirements, for which I am considering Azure Logic Apps:
Files placed in Azure Blob Storage must be migrated to a custom destination (which can differ from case to case)
The number of files is around 1,000,000
When the process is over, we should have a report saying how many records (files) failed
If the process stopped somewhere in the middle, the next run must take only files that have not been migrated
The process must be as fast as possible, and the files must be migrated within N hours
But what worries me is that I cannot find any examples or articles (including the official Azure documentation) where the same thing is achieved with an Azure Logic App.
I have some ideas about my requirements and Azure Logic App:
I think I must use pagination to deal with this number of files, because an Azure Logic App will not be able to read millions of file names at once - https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-exceed-default-page-size-with-pagination
I can add a record into Azure Table Storage to track failed migrations (something like creating a record to say that the process started and updating it when the file is moved to the destination)
I have no idea how I can restart the Azure Logic App without using a custom tracking mechanism (for instance, the same Azure Table Storage instance)
And the question about splitting the work across several units is still open
Do you think Azure Logic Apps are the right choice for my needs, or should I consider something else? If a Logic App can work for me, could you please share your thoughts and ideas on how I can achieve the given requirements?
I don't think a Logic App is a good solution for this requirement, because the number of files is about 1,000,000; that's too many. For this requirement, I suggest you use Azure Data Factory.
To migrate data in Azure Blob Storage with Data Factory, you can refer to this document.
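Purely to illustrate the pagination and failure-tracking ideas from the question, here is a minimal PowerShell sketch with Az.Storage (neither Logic Apps nor Data Factory); the account, container, and log file names are placeholders:

```powershell
# Page through the source container instead of listing ~1,000,000 blobs at once,
# and record failures so a re-run can target only what is missing.
$srcCtx = New-AzStorageContext -StorageAccountName "srcaccount" -UseConnectedAccount
$dstCtx = New-AzStorageContext -StorageAccountName "dstaccount" -UseConnectedAccount

$token = $null
do {
    $page = @(Get-AzStorageBlob -Container "files" -Context $srcCtx `
                                -MaxCount 5000 -ContinuationToken $token)
    foreach ($blob in $page) {
        try {
            # Asynchronous server-side copy; no data flows through this machine
            Start-AzStorageBlobCopy -SrcContainer "files" -SrcBlob $blob.Name -Context $srcCtx `
                                    -DestContainer "files" -DestBlob $blob.Name -DestContext $dstCtx | Out-Null
        } catch {
            Add-Content -Path "failed.log" -Value $blob.Name
        }
    }
    $token = if ($page.Count -gt 0) { $page[-1].ContinuationToken } else { $null }
} while ($token)
```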
I have managed to get the C# and database setup working using ListMappings. However, when I try to deploy the split/merge tool to Azure Cloud Services (classic), it states 'The requested VM tier is currently not available in East US for this subscription. Please try another tier or deploy to a different location.' We tried a few other regions with the same result. Do you know if there is a workaround or an updated version? Is the split/merge service even still relevant? Has anyone got this service to run on Azure lately?
https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-scale-overview-split-and-merge
The answer to the question of whether it is still relevant is, in my opinion... no. Split/merge is no longer relevant given the maturation of elastic pools. Elastic pools with one database per tenant seem the sustainable way to implement multi-tenancy with legacy code. The initial plan was to add keys to each of our tables to have multiple tenants per database. Elastic pools give us the same flexibility without having to make breaking changes to our existing code.
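As a minimal sketch of that layout (assuming the Az.Sql module; the server, pool, and tenant names are placeholders):

```powershell
# One elastic pool shared by all tenant databases
New-AzSqlElasticPool -ResourceGroupName "rg-saas" -ServerName "sql-saas" `
    -ElasticPoolName "tenant-pool" -Edition "Standard" -Dtu 200 `
    -DatabaseDtuMin 0 -DatabaseDtuMax 50

# One database per tenant, all drawing from the pool's resources
foreach ($tenant in @("contoso", "fabrikam")) {
    New-AzSqlDatabase -ResourceGroupName "rg-saas" -ServerName "sql-saas" `
        -DatabaseName "tenant-$tenant" -ElasticPoolName "tenant-pool"
}
```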
Late post here, but we are implementing ElasticScale for a client to split ~50 clients into a database-per-tenant model. I don't think the SplitMerge tool will be used over the long term, just for the initial data migration from one db to many shards, but it has been handy for that purpose. We are using the ElasticScale SDK to allow a single API to route queries to the appropriate shard(s) based on sharding key. Happy to compare notes with you if you are still working on this.
I have a situation where my organization uses a specific project in our VSTS:
{organization-1}.visualstudio.com/{Project.Client}
Some clients have their own as well for the same project:
{organization-2}.visualstudio.com/{Project.Client}
Is it possible to keep these two projects in sync (work-items and code)? Assume that they are both using TFS.
Creating a service endpoint from {organization-2} to {organization-1} is possible, but doesn't seem to provide much AFAICT, since nothing new shows up in the Notifications menu, nor on any of the Work boards relating to the new endpoint
I've tried creating a Service Hook from {organization-1} to an Azure Service Bus, but there doesn't seem to be any way to consume it from {organization-2} that I can see.
How can I get these two organizations to usefully talk to each other?
Work Item change notifications, Code Check Ins, etc.
No, the two accounts can't be synchronized, but you can export/import WIT (work items) and code from one account to the other.
For exporting/importing WIT, you can use MS Excel or other extensions such as VSTS Sync Migration.
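If you would rather script the export, here is a minimal sketch using the work item REST API, assuming a PAT; the account, project, and WIQL text below are placeholders:

```powershell
# Run a WIQL query for the IDs, then fetch work items in batches
# (the batch endpoint accepts up to 200 IDs per call).
$account = "organization-1"   # placeholder
$project = "Project.Client"   # placeholder
$pat     = "<personal-access-token>"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

$wiql = @{ query = "SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = @project" } | ConvertTo-Json
$result = Invoke-RestMethod -Method Post -ContentType "application/json" -Body $wiql -Headers $headers `
    -Uri "https://$account.visualstudio.com/$project/_apis/wit/wiql?api-version=7.1"

$ids   = ($result.workItems.id | Select-Object -First 200) -join ","
$items = Invoke-RestMethod -Headers $headers `
    -Uri "https://$account.visualstudio.com/_apis/wit/workitems?ids=$ids&api-version=7.1"
$items.value | Select-Object id, @{ n = "Title"; e = { $_.fields."System.Title" } }
```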
A quick question about Windows Azure: is there any way to get a programmatic list of the regions we choose from when hosting our app?
CLI:
az account list-locations
PowerShell:
Get-AzLocation
The only way that I can think of is to use the REST Management API.
You have to query the Management API and call the List Locations method. You will, however, need a management certificate and a subscription ID to do so. You can cache the result for some time if you call it very often; I think an hour is a fair time to keep the locations cached. I don't expect changes to happen that often, but it's good to refresh your list of locations from time to time.
Here is one example you can use. And here is a NuGet package that wraps everything up, so you don't need to construct your REST calls manually.
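For the current (ARM) flavor of the same call, here is a minimal sketch assuming the Az.Accounts module for the token; the certificate-based Service Management call described above works along the same lines:

```powershell
# Assumes Az.Accounts is installed and Connect-AzAccount has been run.
$subId = (Get-AzContext).Subscription.Id
$token = (Get-AzAccessToken).Token   # on newer Az versions this may be a SecureString; convert if needed

$locations = Invoke-RestMethod `
    -Uri "https://management.azure.com/subscriptions/$subId/locations?api-version=2022-12-01" `
    -Headers @{ Authorization = "Bearer $token" }

# Cache this (e.g., for the hour suggested above); regions rarely change
$locations.value | Select-Object name, displayName
```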