Different options to move synapse workspace across different environments - azure

What are the different options to export a workspace, let's say from Development into another workspace in TEST, UAT or Production Synapse Analytics? Most of the options I have come across have been around copying code and pasting it in the intended environment. But I want it to be automated so there are fewer moving parts, and to seamlessly move pipelines, linked services & datasets into different environments in a way that is less prone to errors.

I suppose that you could follow this doc, "Source control in Synapse Studio", to establish the integration. And also you need to check your role assignments in access control for the workspace.

Related

Creating automated snapshots of Azure Resources and their dependencies to deploy at a later date

I'm currently busy with an internship. In this internship I need to create a program which automatically creates "snapshots" of the current state of Azure Resources (And sometimes their dependencies) which need to be deployed to another environment. e.g. Acceptance -> Production. These snapshots must then be deployed to the new environment at a later date which has been coordinated with the client.
A solution can consist of >100 Azure resources, ranging from API Managers to LogicApps, CosmosDBs, etc. When a customer accepts or says "ok" to a few resources (= a part of the total solution), a snapshot needs to be made of that resource, in the specific state when the client said OK. That means that I also have to create a snapshot of the dependencies of that specific resource (a LogicApp can depend on a CosmosDB, Keyvault etc).
And I can't just take a reference to the resource in the Acceptance environment, I need to bring that dependency over to production as well, seeing as it might be possible that another developer will continue working on said dependency which might break things.
I am a bit at a loss as to which direction to take here. I don't have a lot of experience with ARM (Templates) and I have been making several prototypes for a month now.
I first tried to generate my own ARM (and Bicep) files by gathering information from the Azure REST API, but I soon discovered this is not viable because I cannot extract all of the information from that API to create said ARM file.
I then looked into modifying the generated ARM files from Azure itself. Whilst this is an option, it contains a lot of information which I do not need or want to transfer over to another environment. It is also very hard to determine which parts of the generated ARM file must be deleted, updated, copied or left alone. And then I still need to recursively get the ARM templates of the dependencies and go through those in an automated way as well.
Is modifying existing ARM templates the best route to go here? Or does a similar product already exist which might help achieve my goal?
Thank you!!
In this case, I would not go with the approach of modifying exported ARM templates; I would go with the approach of Infrastructure as Code, i.e., I would create ARM templates that are as granular as possible, maybe one template per resource at the least, store that infrastructure code in a source repository and, if required, version it to use it in different environments. The reason for recommending one template per resource is to take care of the dependencies in a complex environment. I know this might look like a bigger activity for the first-time implementation, but once the templates are integrated into any continuous integration and continuous deployment (CI/CD) tool like Azure DevOps, all of it can be automated with the help of release pipelines for fast and reliable application and infrastructure updates. For more information in this regard, please refer to this and this Azure documents.

Setup Multiple Environments in Parallel Azure DevOps

I have a query to setup multiple environments at a time so that we can discreetly test multiple projects at once. Ideally we should be able to spin these environments up and down as necessary.
We have microservice based architecture and are mostly using azure PAAS services in our infrastructure.
Currently I have tried to automate our infrastructure through Terraform; it's almost done, but the next step is deployment of code. As the services are not containerized, I tried using Azure Pipelines, but it's a huge task. Can I get any better idea for how we could do this?
Should look at leveraging Azure Pipeline Templates. Once this is defined then can reuse it everywhere. For instance, with Terraform, created a template for doing the plan and apply that just needs to be fed the directory the Terraform is located in. This saved time across all projects as we just need to reference our template and the rest was taken care of.
In terms of your other question with the ability to spin up and spin down this can be easily done if the application is architected with that in mind. Keep in mind for deployment certain things where names must be unique: storage account, app service and things that are potentially shared: i.e. network.
The other piece to consider is how to ensure these ad hoc environments are actually being spun down. Would recommend something like a tagging strategy or process that cleans up resources that haven't been deployed in x amount of days.

What is the recommended way to store environment variables in Azure Functions for different environments?

Currently, I'm storing all key/value pairs in Application Settings, but I'm not happy with this approach. What is the recommended way to store settings for dev, test, stage, and prod? I need to make sure that prod settings are not visible to developers. Is there a way to create 4 different JSON files and define access permissions on them? Or do I need to create 4 different Function apps (or subscriptions)?
Azure App Configuration is a relatively new service that sounds like it could help in terms of managing the config values centrally with more control than individual instance App Settings.
Beyond that, you could perhaps build segregation by limiting devs to pushing code only and not accessing the hosting environment (Azure portal, etc). The layer in between would be something like Azure DevOps or Github Actions that has access to Azure, while devs are limited to pushing code that triggers deployment.
Also worth reminding ourselves that devs ultimately have a lot of access by virtue of writing the code. If they want to get at runtime data, they can, somehow. If you consider the devs untrusted, you may have bigger problems. If it's just a matter of preventing mistakes, a solid devops process is the key.

Proper way to maintain azure resource manager template

I have arm template to recreate resource group with resources and their settings. This works fine.
Use case:
Some developer goes to azure portal and update some settings for some resource. Is there a way how to get exact changes that can be applied to my template to take these changes in effect? (Update template in source control)
If I go to automation script in resource group I can see all resources but my template in source control is different (parameters, conditions, variables, multiple templates linked together ...). I can't see on first look what changes were done and I can't use any diff.
Maybe I missed completely something but how are you solving this issue?
Thanks.
It is not easy to see any changes to resources by comparing templates from within the portal. Best practice is to always use ARM templates (and CI/CD pipelines) to deploy ARM templates to provision resources. Keep these ARM templates under source control to track them.
Further than that, I think you have two main options to track these changes:
1) You can use the Azure Activity Log to track the changes. The Azure Activity Log is a subscription log that provides insight into subscription-level events that have occurred in Azure. This includes a range of data, from Azure Resource Manager operational data to updates on Service Health events.
2) Write a little intelligent code against the Management Plane API. A good starting point is https://resources.azure.com/subscriptions. You could write a little extract that pulls all your resources out daily and commits them to a git repo. This will only update for changes to templates. You can then analyse the delta as or when you need.
Conceptually, the developer should never 'go[es] to azure portal and update some settings for some resource', except for his own development / unit testing work. He then should produce an updated ARM template for deployment in the TST etc environments, and document his unit-tested changes with the new template. If his update collides with your resources in TST he will probably come to you to explain his changes, and discuss the resolution.
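
An added example, not part of the answers above: once a daily extract of resource JSON is committed to git as the second option suggests, the delta can be reduced to a list of changed properties so that out-of-band portal edits stand out. The snapshots, the resource ID and the `VOLATILE` key list are constructed for illustration, and `flatten` and `drift` are hypothetical helper names.

```python
import copy

# Keys that change without anyone editing the resource (assumed list; extend as needed).
VOLATILE = {"etag", "systemData", "provisioningState"}

def flatten(node, prefix=""):
    """Flatten nested JSON into {path: leaf value}, skipping volatile keys."""
    flat = {}
    if isinstance(node, dict):
        for key, value in node.items():
            if key not in VOLATILE:
                flat.update(flatten(value, f"{prefix}/{key}"))
    elif isinstance(node, list):
        for index, value in enumerate(node):
            flat.update(flatten(value, f"{prefix}[{index}]"))
    else:
        flat[prefix] = node
    return flat

def drift(previous, current):
    """Return (resource_id, path, old, new) for every difference between two snapshots."""
    old = {r["id"]: flatten(r) for r in previous}
    new = {r["id"]: flatten(r) for r in current}
    changes = []
    for rid in sorted(old.keys() | new.keys()):
        if rid not in old:
            changes.append((rid, "<resource>", None, "present"))
        elif rid not in new:
            changes.append((rid, "<resource>", "present", None))
        else:
            for path in sorted(old[rid].keys() | new[rid].keys()):
                if old[rid].get(path) != new[rid].get(path):
                    changes.append((rid, path, old[rid].get(path), new[rid].get(path)))
    return changes

# Constructed sample: one storage account as captured on two consecutive days.
RID = ("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-demo"
       "/providers/Microsoft.Storage/storageAccounts/stdemo")
YESTERDAY = [{"id": RID, "etag": "a1", "properties": {
    "supportsHttpsTrafficOnly": True, "minimumTlsVersion": "TLS1_2",
    "networkAcls": {"defaultAction": "Deny", "ipRules": []}}}]
TODAY = copy.deepcopy(YESTERDAY)
TODAY[0]["etag"] = "b2"  # volatile, ignored
TODAY[0]["properties"]["supportsHttpsTrafficOnly"] = False   # portal edit
TODAY[0]["properties"]["networkAcls"]["defaultAction"] = "Allow"  # portal edit

if __name__ == "__main__":
    for change in drift(YESTERDAY, TODAY):
        print(change)
```

Each reported path maps back to a property in the source-controlled template, which makes it possible to either port the change into the template or redeploy to revert it.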

Cross server restore for Azure SQL DW?

Let's say I have a mydev.database.windows.net Azure SQL Server and Azure SQL DW database for development. And I have a myprod.database.windows.net for prod. If I want to restore prod to dev (cross server) is that possible? From what I can see in the documentation (see the -TargetServerName switch documentation), it is not possible.
Are there recommended workarounds other than scripting out all the objects then using a Polybase CREATE EXTERNAL TABLE AS SELECT command to export all tables to blobs then import those tables with Polybase?
The recommended approach to cross server restores with Azure SQL Database (not DW) is to export to a bacpac file then restore, but I don't believe that's an option for Azure SQL DW right?
I may start creating prod and dev on the same Azure SQL Server (as long as the customer wants both in the same Azure subscription). I would prefer the servers be separate, but ease of restore is important.
This will depend on the frequency and freshness of the restores today. The simplest approach is to restore one of the snapshots we take in the background to support RPO. This is called geo-restore. Snapshots are taken at least every eight hours. However, in practice you will see these taken more frequently. As RPO improves over time so will the frequency of snapshots.
To perform a geo-restore of production into dev you can go to the portal and begin the provisioning process. In the provisioning blade for SQL DW select your dev server. Under select source choose "backup". This will extend the provisioning blade as you will need to then choose the backup you want to use. The rest should be straightforward.
If you need to do this much more frequently or against an "on demand" (i.e. times of your choosing) snap then you would need to build out custom code as you suggest. However, if you are ok to live with our snapshots then the geo-restore would be a good option.
The team are looking for customer feedback on RPO and backup / restore requirements. If you have a business need for more frequent snapshots to support a business case then the team would love to hear from you. Please post this on our user voice feedback channel or reach out to us directly at sqldwfeedback@microsoft.com if business sensitive.
