No Agent Queue Found - Azure

I recently exported and imported a VSTS build definition to create a similar build for a similar project. However, when I try to save the definition, VSTS displays the error:
"No agent queue was found with identifier x."
Does anyone know the cause? I looked at some other posts online, but they were related to security settings, which are all correct in my case.
There are also a few posts related to build capabilities, but this is not that error.

When you import a build definition, it pulls across all the values from drop-downs, including the "Default agent queue". For projects in the same VSTS account, this value ("Hosted" in my case) will have the same name in the original definition and the new definition you are creating. However, the IDs may be different or not "hooked up" correctly by the import process.
1. Select the "Process" header before the beginning of your task list.
2. Click the "Default agent queue" drop-down.
3. Select the "other" agent pool that has the same name. In my case, there were two items named "Hosted"; I picked the one that was not already selected.
Now your build will save and queue.
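As an aside, YAML-based pipelines reference the agent pool/queue by name in the pipeline file itself rather than by a stored ID, which avoids this kind of stale-ID problem when definitions are copied between projects. A minimal sketch (the image name and step are placeholders):

    # azure-pipelines.yml - minimal sketch; the pool is referenced by name,
    # so no queue ID is baked into the definition.
    trigger:
    - master

    pool:
      vmImage: 'ubuntu-latest'   # or use "name: <pool name>" for a self-hosted pool

    steps:
    - script: echo "Build steps go here"
      displayName: Placeholder build step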

Related

How do I preserve creator / assignment / commenter associations when a self-hosted GitLab export is imported on GitLab.com?

I have a handful of projects on a self-hosted GitLab instance which I am losing the ability to use. I've decided to move these projects to GitLab.com, in part because of how easy I expect the migration process to be. Since it is GitLab on both sides, I hoped that the imported project would be essentially indistinguishable from the exported project.
I have exported the projects from the GitLab instance where they were previously hosted. I have also done a couple of trial imports onto GitLab.com.
First, I tried importing a project with no other setup. I used GitLab.com's big green "New Project" button, selected "Import project", selected "GitLab export", and provided a project name and the export .tar.gz file.
The import seemed largely successful but with one significant shortcoming. Every issue is marked as having been reported by me. Every issue is assigned to me. Every comment is marked as having been made by me - but with an extra line at the end explaining who actually made the comment.
This clearly failed to meet the hopes I described at the top - since ownership and authorship history has been practically destroyed.
Next, I had one of my collaborators on the original self-hosted GitLab instance create a GitLab.com account. I also had them associate the same email they used in the self-hosted GitLab instance with their GitLab.com account. Then I followed the same import process.
The outcome of this import was largely indistinguishable from the first attempt.
Is it possible to preserve ownership and authorship information for this kind of migration? If so, how?
It is not possible to independently perform "User Mapping" for an import into GitLab.com. Instead, GitLab.com support needs to be engaged to perform the import.
This is because the "Administrator" access level required by GitLab is not available to any customers of GitLab.com.
However, as of March 31st, 2021, GitLab.com support is so overwhelmed that after almost three months of waiting for them to perform this import for my projects, they have closed my support request essentially as "won't fix" because "we're too busy".
Thus, in practice, it is not possible to perform this User Mapping on GitLab.com at all.
See the GitLab support handbook workflow on user mapping: https://about.gitlab.com/handbook/support/workflows/importing_projects.html#user-mapping

How to create a bug or notification in only one task/job when another task/job fails in an Azure DevOps release pipeline

I have a pipeline with a few tasks, shown in the image. I'm creating a bug work item when a particular task fails, which is working fine using a Logic App.
My problem is that I don't want to add a new bug-creation task after each deployment task shown in the image.
Is there any way I can create only one bug work item based on a failure in any of the tasks in the pipeline, maybe at the end or somewhere similar?
Not sure why you had to go the Logic App route, as there is an option to do this with Azure Pipelines itself, out of the box.
Navigate to {your pipeline} > Options as shown below:
If the build pipeline fails, you can automatically create a work item to track getting the problem fixed. You can specify the work item type. You can also select if you want to assign the work item to the requestor. For example, if this is a CI build, and a team member checks in some code that breaks the build, then the work item is assigned to that person.
Additional Fields: You can also set the value of other work item fields. For example:
Field           Value
-------------   ---------------------------------
System.Title    Build $(Build.BuildNumber) failed
System.Reason   Build failure
Check Build Options for more details.
UPDATE:
Doing this for Release Pipelines is not supported as an out of the box feature as of today. However, there are extensions available in the Visual Studio marketplace that can be used as alternatives until it is supported.
Here are two such extensions:
Create Bug on Release failure
Create Work Item
Another idea with PowerShell tasks is discussed here.
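For completeness, here is a rough sketch of what such a script-based step could look like in a YAML pipeline, using the Azure DevOps work item REST API. This is an illustration built on assumptions, not an out-of-the-box feature: the bug title is a placeholder, the build service identity needs permission to create work items, and in a classic release pipeline the agent job must be allowed to access the OAuth token.

    # Sketch only: creates a Bug via the REST API when an earlier task in the job has failed.
    - task: PowerShell@2
      displayName: Create bug on failure
      condition: failed()          # run this step only if a previous task failed
      inputs:
        targetType: inline
        script: |
          # JSON Patch body; the title value is a placeholder.
          $body = '[{ "op": "add", "path": "/fields/System.Title", "value": "Build $(Build.BuildNumber) failed" }]'
          Invoke-RestMethod `
            -Uri "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/wit/workitems/`$Bug?api-version=6.0" `
            -Method Post `
            -ContentType "application/json-patch+json" `
            -Headers @{ Authorization = "Bearer $(System.AccessToken)" } `
            -Body $body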

Configuration version is missing - Terraform Cloud

I set up a workspace and I am following the Enforce Policy with Sentinel hands on guide.
I see the following message in the run tab:
As soon as I try to press the "Queue plan" button, I receive this error:
My configured variables are:
Is there something else I need to configure to be able to queue a plan?
Executing from the CLI, I was able to trigger a run (in Terraform Cloud) that only included the plan step. The run execution can be viewed if I access the specific run URL directly.
Any help or suggestions are more than welcome!
I guess you've already resolved this issue, but I'll post my resolution anyway.
When this issue occurred, I had the wrong workspace name settings.
There were two workspaces with similar names, and I don't know why this happened. I then deleted one of them.
The issue then occurred on the other one.
In the end, when I deleted that workspace and recreated it, the issue no longer occurred.
In my case I didn't have duplicate or similarly named workspaces (there was only one workspace). I found that after running terraform apply locally once first, the UI controls started to work as expected.

Stuck with CI/CD deploy of Linux Consumption based Azure Function

I made an Azure Function (Python), hosted on a Linux Consumption plan. The function app is located in a dev resource group. I would now like to be able to deploy it (and subsequent changes) to the staging and prod resource groups. The documentation on the many different ways to do it has got me confused, especially since most of the deployment methods (deployment slots...) are not available with the Linux Consumption plan and I have no use for the Premium one. I thought of setting up version control, but I cannot link my Azure Function to an Azure DevOps repo (the Deployment Center is disabled, greyed out).
How would you do it? Ideally with Azure DevOps.
Thanks in advance
First off, you need to make sure that you have access to the subscription in question, along with the resource groups and function apps.
Build
On DevOps, for the project containing the repository, go to Pipelines > Pipelines (highlighted with red). In the top-right corner, you should be able to see and click "New pipeline". I'll be doing an "Azure Repos Git (YAML)" pipeline in this answer, so you might as well go along with that. If you have any other particular preferences, just make sure you change what needs to be changed accordingly. Our goal is basically just to publish an artifact from our build process, which will in turn be consumed by a "Release pipeline".
Moving along, for the build pipeline, choosing "Azure Repos Git (YAML)" will prompt you to choose which repository in your project will 1) contain the YAML file we're about to create, and 2) have the source code available for the pipeline. Without going into too much detail, it is also possible to place all YAML pipeline files into their own repository, and then include (via resources) the repositories containing the source code.
The next step is to "Configure your pipeline". There is actually a "Python Function App to Linux on Azure" template available. However, it contains deployment stages as well, and I generally put everything deployment-related into my "Release pipeline". For now, though, I went with the "Starter pipeline".
An online editor will pop up. Towards the top of the editor, you'll see the repository's name and an "azure-pipelines.yml". Click on "azure-pipelines.yml" to rename the pipeline, as well as the name of the YAML file that'll end up in the repository's root.
I've put up a version of the aforementioned template, boiled down to what is necessary, and it's available here. Simply delete whatever is already in the "starter pipeline" and copy-paste the contents of the pastebin into the pipeline.
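In case that link goes stale, here is a rough approximation of what the boiled-down build pipeline looks like, based on the standard "Python Function App to Linux on Azure" template rather than my exact file; the trigger branch, Python version and artifact name are assumptions you should adjust:

    # Rough sketch of a build-only pipeline for a Python function app.
    # Adjust the trigger branch, Python version and artifact name as needed.
    trigger:
    - master

    pool:
      vmImage: 'ubuntu-latest'

    steps:
    - task: UsePythonVersion@0
      displayName: Use Python 3.9
      inputs:
        versionSpec: '3.9'

    - bash: |
        pip install --target="$(System.DefaultWorkingDirectory)/.python_packages/lib/site-packages" -r requirements.txt
      displayName: Install dependencies
      workingDirectory: $(System.DefaultWorkingDirectory)

    - task: ArchiveFiles@2
      displayName: Archive function app
      inputs:
        rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
        replaceExistingArchive: true

    - publish: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
      displayName: Publish build artifact
      artifact: drop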
When you save the pipeline, you probably want to put it into a different branch to begin with instead of your master branch (it will prompt you for it), and then create a PR. Accept the PR when the pipeline works (you can run the pipeline using your newly created branch). When the build pipeline successfully runs without errors, you should be able to see an artifact published, if you navigate to your successful run's overview (highlighted with red). You can click and examine the contents to check if they are as expected.
Release
Go to "Releases" (highlighted with green, first picture). From here, you should be able to see and click a "+ New"-button.
It will immediately prompt you to select what type of job you want. Just click "Empty job" to begin with.
First choose an artifact to consume. Click the "Add an artifact"-box to the left. Find the pipeline you just created from the drop-down list. You can configure the version to use (if you have certain preferences), and give the artifact an alias that can be used throughout the release pipeline.
Next is to set up your stages. You want three stages: a stage for development, a stage for staging, and lastly, a stage for production. Currently you should have a "Stage 1". If you hover over the stage, you can see a "+"-sign below the box. Click it to add a stage; choose "Empty job" again. Repeat this for the newly created stage box (hover, click +, add empty job).
You should now have something like this:
Let's start by configuring stage 1. Click the stage (the box itself), and name it "Development" or something of your preference. Then click the "1 job, 0 task"-link. Click the "Agent job"-box, and configure the agent job, as you see fit (make sure the agent downloads your artifact, it can be configured in the "Artifact download").
Next, click the "+"-sign on the agent job you just configured. From the prompt, use the search bar to find "Azure Functions". Note that there are three tasks matching this; you want the one that is just called "Azure Functions". Click and configure the newly added task. It should be straightforward here: pick "Function App on Linux" and find your "development" function app in the list. The "Package or folder" should be something like $(System.DefaultWorkingDirectory)/**/*.zip by default, and it should suffice unless you have done some customization to your build pipeline's artifact.
You should have something like this:
From the "Tasks"-dropdown (with the red warning circle), you can move to "Stage 2" (you'll of course rename this as you did with "Stage 1" to "Development"). Since you're not using slots, swapping is unfortunately not possible between 2 function apps, in two different resource groups - at least not by my knowledge. So you'll have to repeat the entire process from the "Development" stage, where you use the artifact to deploy to the function app in the staging resource group. The same goes for your last stage "Stage 3", where you deploy to your function app in your production resource group.
Staging and approval
What we've been waiting for, I imagine. From the picture with the stages overview, you can see that each stage has two attached "buttons", one on each side of the box: one with a lightning bolt and a user icon (left), and one with just a user icon (right). The one on the left is the "pre-deployment conditions", while the one on the right is the "post-deployment conditions". In your scenario, you probably want to configure pre-deployment conditions for your "Stage 2"/"Staging" and for "Stage 3"/"Production". In both cases, I'd add "Pre-deployment approvals", like this:
You can add specific persons, or entire groups. It will require that someone then goes to the release pipeline overview, and then approves the next stage, before it will be deployed (or rather, the stage won't start before it has been approved).
Phew, that was a long one... I hope this cleared some of the confusion you have had, and that it works out for you.

Perforce workspace creation issue

While creating a workspace in Perforce, I got the below error:
You should define workspace view in more detail. (minimum 2 depth)
This is not a standard Perforce error, and is therefore most likely coming from a custom trigger set up by your Perforce admin. In order to resolve a custom trigger failure you will need to consult with your Perforce administrator (i.e. the person who defined the trigger) to determine what conditions are required to satisfy the trigger.
If you would like to learn more about how to define triggers, see https://www.perforce.com/manuals/p4sag/Content/P4SAG/chapter.scripting.html
(this is not useful to you as an end user encountering a trigger failure, but may provide additional context on how triggers work from your admin's perspective).
The workaround/fix I got is:
Make sure to select a folder as the "Workspace root" whose last two levels are empty folders.
For example, suppose you choose "C:\Users\stackuser\workspace\project\codes"; then make sure "project" and "codes" are two empty folders.
