Our application is an ASP.NET Core application hosted on Azure, and our code and pipelines are hosted in Azure DevOps. The app is pretty simple: just a web application and an Azure SQL database.
We currently have a large number of tenants that we would like to deploy to after each release.
We currently have three build pipelines (triggered off the dev, test and master branches):
- Dev
- Test
- Production
Where I currently get lost is where the individual tenants fit in; our current plan was to create a "Release" pipeline for every tenant. Is this the best way to do it, or should we be using stages instead?
I'm a bit confused as to why you have separate build pipelines for dev, test, and production.
You might consider consolidating all of your pipelines (build and release) into a single YAML pipeline. Under this approach, you'd have one build stage, which you would capture as a YAML stage template, and use expressions/conditions to account for the differences between environments. Alternatively, if the build process varies widely between environments, you could have a separate build stage template for each and include the appropriate one based on the branch that triggered the build.
For releases, you could capture each as a YAML template and then include them at the end of the pipeline using the new deployment job element that has been added to the YAML schema.
Hopefully this gets you closer to a solution or at least gives you some things to think about.
Opinion on GitLab and Bitbucket: need to choose a platform that provides safe coding and a non-centralized workflow
I have seen many pros and cons of both Bitbucket and GitLab. I would like to know whether Bitbucket supports a decentralized workflow like GitLab does.
I am trying to design a workflow like Development -> Master -> Staging -> Production. Two questions:
Is it possible in Bitbucket to create a production branch independently of master, as in GitLab?
Also, which platform provides a more secure workflow, with minimal coding errors reaching the production environment from dev?
Context
I'm creating a Firebase Functions (v2) web API and want to export the code as a different function depending on the current GitHub branch I am on (the function is deployed through GitHub Actions).
E.g.:
Say I accept a PR into prod: it should deploy to function-id-ts.a.run.app. But if I push my changes to dev branch x, it should deploy the function to dev-x-function-id-ts.a.run.app.
My plan was to achieve this by passing an env variable to the script (either through the GitHub Action or a local .env file, depending on where I am deploying from) and just setting exports[process.env.ENV_VARIABLE] = onRequest({options}, api).
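For reference, a minimal sketch of that plan (assuming the firebase-functions v2 SDK and a CommonJS build; ENV_VARIABLE, the options and the handler body are placeholders) might look like this:

    // index.ts -- export the API under a name taken from the environment.
    // ENV_VARIABLE is set by the GitHub Action or by a local .env file.
    import { onRequest } from "firebase-functions/v2/https";

    const api = onRequest({ cors: true }, (req, res) => {
      res.status(200).send("ok"); // placeholder handler
    });

    // e.g. "function-id" when deploying from prod, "dev-x-function-id" from dev branch x.
    const exportName = process.env.ENV_VARIABLE ?? "function-id";

    // Only this single export is visible to firebase deploy, which is why the
    // other previously deployed functions get flagged for implicit deletion.
    module.exports = { [exportName]: api };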
Problem
If I do this, then when I deploy to Firebase/GCP it will implicitly delete any functions that are not included in index.js. This means that if I deploy dev-x-function, my existing function will be deleted.
From the deletion docs:
With implicit function deletion, firebase deploy parses index.js and removes from production any functions that have been removed from the file.
Questions
Is this an accepted/recommended method of managing different environments (dev, staging, prod)?
If this is not the way I should be doing it, how should I approach this?
If this is the way I should be doing it, how would I go about deploying the functions (without the existing ones being deleted)?
NB: If it is required, I am more than happy to access/manage it through GCP (I currently manage the deployed function and view logs through GCP, and just use Firebase as a proxy so it integrates nicely with the rest of the tools I am using).
I want to make sure I am taking the right approach.
I am building virtual environments in Azure on a regular basis, anywhere from 3 to 5 servers at a time. Each server needs one of 4 different resource configurations (RAM/CPU/...). Obviously I could script out each VM and just use PowerShell to deploy each individual VM each time.
What I really want, though, is a utility or web page where I can say "I need to create x servers, and here are their specifications", see how much it will cost, and have it start building them.
Is there any tool like this or what would be the best approach to this?
You could automatically create Azure resources from a Resource Manager template. You create a template file that deploys the resources and a parameters file that supplies parameter values to the template.
Also, you could easily edit and deploy the template in the Azure portal: search for Template > Deploy a custom template > Build your own template in the editor. You can reuse the template after you save it, and there are multiple guides and samples covering the kinds of templates you might want to deploy.
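If you would rather drive the same template deployment from code instead of the portal or PowerShell, a rough sketch using the Azure SDK for JavaScript/TypeScript (assuming the @azure/arm-resources and @azure/identity packages; the resource group, deployment name and file paths are placeholders) might look like this:

    // deploy-template.ts -- deploy an ARM template plus a parameters file.
    import { ResourceManagementClient } from "@azure/arm-resources";
    import { DefaultAzureCredential } from "@azure/identity";
    import { readFileSync } from "fs";

    const subscriptionId = process.env.AZURE_SUBSCRIPTION_ID!; // placeholder
    const client = new ResourceManagementClient(new DefaultAzureCredential(), subscriptionId);

    // template.json describes the VMs; parameters.json supplies per-run values
    // such as the server count and the RAM/CPU size for each one.
    const template = JSON.parse(readFileSync("template.json", "utf8"));
    const parameters = JSON.parse(readFileSync("parameters.json", "utf8")).parameters;

    async function main() {
      const result = await client.deployments.beginCreateOrUpdateAndWait(
        "my-resource-group",   // placeholder resource group
        "vm-batch-deployment", // placeholder deployment name
        { properties: { mode: "Incremental", template, parameters } }
      );
      console.log("Provisioning state:", result.properties?.provisioningState);
    }

    main().catch(console.error);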
All of these Azure technologies (Bots, FaaS, Logic Apps and Runbooks) can be used to run scheduled jobs. I don't know when we should use each of them, or in which scenarios.
YMMV, but here are some pretty good rules of thumb:
Are you doing PowerShell based Automation work? If Yes, consider Azure Automation Runbooks.
Are you building a bot? If Yes, consider the Azure Bot Framework service.
Are you building a workflow that executes on a timer, especially one that integrates with other services? If Yes, consider Logic Apps.
Are you writing generic application code? If Yes, consider Azure Functions (for the scheduled-job case, see the timer-trigger sketch after this answer).
If none of those fit, I'd be surprised, but you might try starting with Azure Functions, since we're kind of an "everything as a service". Still, there is a reason we have the different products: they specialize to enable better productivity within their specialty (Bots, Automation, and Integration).
Note: I'm one of the PMs on the Azure Functions team here at Microsoft.
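To make the scheduled-job case concrete, a minimal sketch of a timer-triggered Azure Function (assuming the Node.js v4 programming model and the @azure/functions package; the function name and schedule are placeholders) might look like this:

    // A timer-triggered function that runs every day at 02:00 UTC.
    // The schedule is an NCRONTAB expression: {second} {minute} {hour} {day} {month} {day-of-week}.
    import { app, InvocationContext, Timer } from "@azure/functions";

    app.timer("nightlyJob", {
      schedule: "0 0 2 * * *",
      handler: async (timer: Timer, context: InvocationContext): Promise<void> => {
        context.log(`Timer fired; past due: ${timer.isPastDue}`);
        // ...the actual scheduled work goes here...
      },
    });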
We've recently set up our resources in the Azure portal, and giving rights in particular was a lot of work.
I wonder if we did it correctly and in the proper order. Our web application seems to work fine.
Next week our client wants us to set up our environment in his account. This time I want to do it properly ;)
What is the best approach to this? I don't want to bother him every time I need to change something.
The situation:
2 developers. 1 of them (myself) needs to add extra resources.
1 resource group
Web app using the S1 plan
4 deployment slots. 2 will be created right away, the other two later
1 Storage account
1 SQL Server
1 Elastic pool
Several SQL databases. Some will be created by code.
Last time, I needed to give my co-worker access to each resource in the project individually. I assume that can be done more easily.
What role do I need myself in order to access all resources, create deployment slots, create databases and set up continuous builds?
Assuming that all the relevant resources are located in the same resource group (which is the recommended pattern), you just need to give Contributor access to the resource group, and it will apply to everything in it.