We have multiple small projects within our organization. Ever since we recently adopted Azure DevOps, we have been creating an individual Azure DevOps project for every one of these projects. The ALM process has been going very well, with end-to-end traceability established within the online tool.
However, as we get to the end of each of these projects, we have started to realize that each individual project and its code need to be maintained further for bug fixes and hotfixes. Unfortunately, Microsoft doesn't give clear guidance on what to do with a project's code once the project is completed.
Since we have a separate Azure DevOps project for all our maintenance applications, this becomes even more confusing.
So, could you suggest the best practice for maintaining code after a project is completed? Here are the options I can think of:
Just keep the code in the same project and keep doing the bug fixes there. But this will create an administrative nightmare to keep track of multiple projects for our maintenance application landscape (especially since we already maintain a single project with multiple repos and teams).
Migrate the code from the completed project's repository to the maintenance project. But this again means migrating from one repo to another, so I am not sure it is the right approach.
We are planning to migrate from TFS 2015 to Azure DevOps, and the task assigned to me is to find a way to compare what we have on TFS and Azure after the migration, to ensure that all the tasks, bugs, etc. were successfully migrated. I've checked the Azure migration guide and found nothing about such post-migration checking and comparison. Is there a tool for this, or can we only do the whole check and comparison manually?
There is no such tool.
I have never experienced a partial migration. Due to the way the import works, that is also VERY unlikely: either the complete import fails, or the data is going to be there. I've done many of these migrations, as well as server migrations/upgrades, and the kind of data loss you're worried about has never happened.
The one thing you'd need to be careful of is changes to the retention policies.
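That said, if you want a quick sanity check rather than a full item-by-item comparison, you can run the same work item query against both servers and compare the counts. A minimal WIQL sketch (run it in the query editor or via the REST API on each side; the type filter values are illustrative):

```sql
SELECT [System.Id]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.WorkItemType] IN ('Task', 'Bug')
```

If the counts per work item type match on both sides, that's a reasonable smoke test that nothing was dropped.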
I am learning Azure Data Factory and would really like to do its development in a Visual Studio environment. I have VS 2019 installed on my machine and I don't see an option to develop ADF in it.
Is there any version of VS in which ADF can be developed, or are we stuck developing it in the web UI for the time being?
I know the BI development tools needed an additional plug-in to the VS environment to work. Does ADF need something similar?
If not, how can we back up the work done in the ADF web UI? Is there an option to link it with Azure Repos or Git?
Starting with ADF V2, development is really intended to be done completely in the web interface. I had the same question as you at the time, but now the web tools are quite good and I don't give it a second thought. While I'm sure there are other options for developing and deploying the ARM templates, do yourself a favor and use the web UI.
By default, Data Factory only saves code changes on "Publish". An optional configuration allows source control via Git integration; you can use either Azure DevOps or GitHub. I highly recommend this approach, even if you only ever work in the main branch (fine for lone developers, a bad idea for collaboration). In this case, Publish takes the current state of the main branch and surfaces your artifacts to the ADF service. That means you will still need to Publish for your changes to go live.
NOTE: Git integration is also supported in Azure Synapse, where it has tremendous value for collaboration across a wide variety of artifact types.
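For reference, the Git link is stored on the factory resource itself as a repoConfiguration block. A sketch of what this looks like in the factory's ARM template for an Azure DevOps repo (all names here are placeholders):

```json
{
  "properties": {
    "repoConfiguration": {
      "type": "FactoryVSTSConfiguration",
      "accountName": "my-devops-org",
      "projectName": "my-project",
      "repositoryName": "adf-artifacts",
      "collaborationBranch": "main",
      "rootFolder": "/"
    }
  }
}
```

In practice you'd normally set this up through the "Set up code repository" dialog in the ADF UI rather than by hand.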
So I am getting confused about the right approach to implementing code that fires on a scheduled basis within Azure.
Originally we were using a standard console app that would be put in the webjob folder on deployment. I found this a bit noddy as we had logic looping and waiting for the right time to fire.
I then tried the Azure WebJobs SDK extensions package https://github.com/Azure/azure-webjobs-sdk-extensions, but I see this has gone quiet and the master branch is currently broken! I like it because it has a CRON-type approach with a function.cs, but now I'm not sure if it is being maintained.
So do people have a preference on how a background process would run, e.g. a scheduled task that would run at 2am every day against a database?
Too much choice and not enough consensus on what the right way is?
Much appreciated in advance
I can think of three options, all of which are valid and can suit your needs. Which one to choose in the end comes down to your requirement specifics and your technical expertise.
WebJobs. These are the most powerful and most difficult to build and maintain. You typically use a dedicated project template in Visual Studio to author these. You can ignore that GitHub link - that's not what you need. Make sure you have the Azure workload enabled in Visual Studio and create a WebJob project.
Azure Functions. These are a more lightweight alternative to WebJobs. There is Visual Studio tooling available for this as well, but you also have the option of writing your code directly in the portal. Azure Functions will time out after some period of time, so if your job runs for more than a minute or two this might not be the best option.
Logic Apps. This is more of a power user tool with an easy to use (debatable) designer interface. But it's also incredibly powerful and you can call WebJobs or Functions if you need to from a Logic App.
I could add links but I'm sure you could find them easily enough.
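To make the Functions option concrete: a timer-triggered function takes an NCRONTAB expression, so the "2am every day" job from the question would look something like this sketch (in-process C# model; the class name and body are invented for illustration):

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class NightlyDatabaseJob
{
    // NCRONTAB has six fields: {second} {minute} {hour} {day} {month} {day-of-week}.
    // "0 0 2 * * *" fires once a day at 02:00.
    [FunctionName("NightlyDatabaseJob")]
    public static void Run([TimerTrigger("0 0 2 * * *")] TimerInfo timer, ILogger log)
    {
        // Your database work goes here.
        log.LogInformation($"Nightly job ran at {DateTime.UtcNow:O}");
    }
}
```

The CRON-style schedule lives in the attribute, so there's no hand-rolled loop waiting for the right time to fire.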
We have recently released our code to Production, and as a result have cut a branch to ensure we can support our current release with hot-fixes, whilst keeping it isolated from any on-going development.
Here is our current structure:
Project-
/Development
/RC1
Until recently using Octopus we have had the following process:
Dev->Staging/Dev Test->UAT
Which works fine as we didn't have an actual release.
My question is how can Octopus support our new way of working?
Do we create a new/cloned project in Octopus named RC1 and have CI from our RC1 branch into that? Then add/remove these as appropriate once the RCs are no longer required?
Or is there another method that we've clearly missed out on?
It seems that most organisations that are striving for continuous something end up with a CI server and continuous deployment up to some manual sign off environment and then require continuous delivery to production. This generally leads to a branching strategy in order to isolate the release candidate to allow hot fixing.
I think a question like this raises more points for discussion, before trying to provide a one size fits all answer IMHO.
The kind of things that spring to mind are:
Do you have "source code" dependencies or binary ones for any shared components?
What level of integration / automated regression testing do you have?
Is your deployment orchestrated by TFS, or driven by a user in Octopus?
Is there a database as part of the application that needs consideration?
How is your application version numbering controlled?
What is your release cycle?
In the past where I've encountered this scenario, I would look towards a code promotion branching strategy which provides you with one branch to maintain in production - This has worked well where continuous deployment to production is not an option. You can find more branching strategies discussed on the ALM Rangers page on CodePlex
Developers / testers can continually push code / features / bug fixes through staging / UAT. At the point of release, the Dev branch is merged to the Release branch, which causes a release build and creates a NuGet package. This should still be released to Octopus in exactly the same way, only it's a brand new release and not a promotion of a previous release. You would need to ensure that there is no clash in version numbering, so one strategy might be to have a difference in the major number - this would depend on your current setup. This does, however, take the opinionated view that the deployment is orchestrated by the build server rather than Octopus Deploy. Primarily TeamCity / TFS calls out to the Octopus API, rather than a user choosing the build number in Octopus (we have been known to make mistakes):
octo.exe create-release --version GENERATED_BY_BUILD_SERVER
To me, the biggest question I ask clients is: "What's the constraint that means you can't continuously deploy to production?" Address that constraint (see the theory of constraints) and you remove the need to work around an issue that needn't be there in the first place (not always that straightforward, I know).
I would strongly advise that you don't clone projects in Octopus for different environments, as it's counterintuitive. At the end of the day you're just telling Octopus to go and get this NuGet package version for this app and deploy it to this environment, please. If you want to get the package from a different NuGet feed for a release, you can make use of the custom binding on the NuGet feed field in Octopus and drive it with a scoped variable depending on the environment you're deploying to.
Step 1 - Setup two feeds
Step 2 - Scope some variables for those feeds
Step 3 - Consume the feed using a custom expression
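As a sketch of those three steps (feed and variable names are invented here): create an external feed per branch, define a project variable scoped by environment, and bind the step's feed field to it with Octopus's #{...} variable syntax:

```
Variable: NuGetFeedId
  Value: feed-rc1   (scope: environment = UAT)
  Value: feed-dev   (scope: environment = Dev, Staging)

Package step -> NuGet feed: #{NuGetFeedId}
```

At deploy time Octopus resolves #{NuGetFeedId} to whichever feed is scoped to the target environment.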
I hope this helps
This is unfortunately something Octopus doesn't directly have - true support for branching (yet). It's on their roadmap for 3.1 under better branching support. They have been talking about this problem for some time now.
One idea you mentioned would be to clone your project for each branch. You can do this under the "Settings" tab (on the right-hand side) of the project that you want to clone. This allows you to duplicate your project and simply rename it to one of your branches - so one is the PreRelease or Release Candidate project and the other is your mainline Dev (I would keep the same name for that project). I'm assuming you have everything in the same project group.
Alternatively, you could just change the NuSpec files in your projects in different branches so that you can clearly see what's being deployed on the project overview page or on the dashboard. For your RC branch, you could add the suffix -release in the NuSpec, which is legal (the Semantic Versioning rules cover prereleases in rule #9). This way, you can use the same project but have different packages to deploy. If the targeted servers are the same, this may be the "lighter" or simpler approach compared to cloning.
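For example, the .nuspec in the RC branch might look like this (id and version invented for illustration):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyApp.Web</id>
    <version>2.1.0-release</version>
    <authors>Example Team</authors>
    <description>Release candidate build of MyApp.Web</description>
  </metadata>
</package>
```

The -release suffix marks the package as a prerelease under SemVer (it sorts below the plain 2.1.0), so it stands out from the mainline packages on the Octopus dashboard.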
I blogged about how we do this here:
http://www.alexjamesbrown.com/blog/development/working-branch-deployments-tfs-octopus/
It's a bit of a hack, but in summary:
Create a branch in TFS
Create a branch-specific build definition
Create a branch-specific drop location for Octopack
Create a branch-specific Octopus deployment project (by cloning your 'main' deployment project)
Edit the newly cloned deployment, re-pointing the NuGet feed location to the branch-specific output location created in step 3
I'm using Visual Studio Online for my TFS needs, and I have a pretty big solution which contains several web projects.
How can I set up automatic deployment of a specific project in the solution to a specific website on Azure?
The default workflow used to deploy in VSO does not seem to handle this scenario.
The "first" web project found within the solution is chosen for deployment according to this discussion. Note that the discussion relates to git on VSO but it seems to hold true for builds using the VSO CI workflow.
According to the discussion changing the project names to influence the ordering should/might work but results seem mixed.
We chose to add a second solution containing only the web project to deploy, its dependencies, and its tests. This will not work if there are dependencies on other web projects.
Also take note of this article on a configuration-based approach, a question that this one might be a duplicate of, or a question concerning actual deployment of multiple projects into one site.
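Another workaround worth trying (project and profile names below are placeholders) is to leave the solution alone and point the build at the one project you want deployed, via the build definition's MSBuild arguments:

```
/p:DeployOnBuild=true /p:PublishProfile=MyAzureSite /p:Configuration=Release
```

combined with targeting MyWeb/MyWeb.csproj instead of the .sln in the build step. DeployOnBuild and PublishProfile are standard Web Publish properties, so this sidesteps the "first web project wins" behaviour entirely.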