We are trying to migrate our old issue "tracking" system to GitLab.
For legacy reasons the issues have relatively large numbers (800 and above), and they are not consecutive.
However, for back-references it would be great if we could have a single number for each issue rather than an "old" and a "new" number, since in some contexts issues are referred to by number (e.g. by external parties who will also use GitLab in the future).
I found the question Set Minimum Issue Number in Gitlab, where dummy issues were created to "fill" the gaps. However, this creates a lot of clutter (especially e-mails; see Gitlab API - Create issue quietly?).
Any ideas how to solve this?
The ideal flow would be:
1. Use the GitLab API to create the issues we already have, and add a parameter to set the number of each issue (see the sketch below).
2. When using GitLab afterwards, the numbers are filled up by new issues over time, or they count up from the highest issue number currently in the project.
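Something like the sketch below is what I have in mind. It assumes the create-issue endpoint honours an iid attribute (the API docs mention one for administrators and project owners); the instance URL, project ID, token variable, and issue list are placeholders:

```python
import os
import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder instance URL
PROJECT_ID = 123                            # placeholder project ID
TOKEN = os.environ["GITLAB_TOKEN"]          # personal access token with api scope

# Issues exported from the old tracker: (old number, title, description)
old_issues = [
    (800, "Legacy issue 800", "Imported from the old tracker."),
    (815, "Legacy issue 815", "Imported from the old tracker."),
]

for number, title, description in old_issues:
    resp = requests.post(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/issues",
        headers={"PRIVATE-TOKEN": TOKEN},
        data={
            "title": title,
            "description": description,
            # 'iid' sets the internal issue number; the API is said to honour
            # it only for administrators / project owners.
            "iid": number,
        },
    )
    resp.raise_for_status()
    print("created issue", resp.json()["iid"])
```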
If I could actually set the issue number afterwards in the database (as was hinted in the linked question above), how would GitLab handle this? (I don't even know where to start looking in the GitLab code base; any hints on that might also answer this question.)
Thanks in advance for any advice on how to tackle this.
I found a trick to do that.
GitLab can export issues to .csv, and .csv is an editable format: it can be edited in any Excel-like program, or even in a plain text editor.
Create a first issue and fill it with dummy data (I suggest filling in every field, as that is a way to spot which field is responsible for what), then export it.
Then fill in your data in whatever way is suitable, preferably mass-exported from the previous issue tracker, and import the file back into GitLab.
If done correctly, this can be partly automated.
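If you go that route, the "fill in your data" step can be scripted. A minimal sketch in Python that writes an import file, assuming the essential columns are title and description (match the header row of your own exported sample, which is exactly what the dummy-issue export is for):

```python
import csv

# Issues pulled from the previous tracker: (old number, title, body)
old_issues = [
    (800, "Legacy issue 800", "Imported from the old tracker."),
    (815, "Legacy issue 815", "Imported from the old tracker."),
]

with open("issues.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    # Use the header row from your own GitLab export as the reference;
    # title and description are the essential columns.
    writer.writerow(["title", "description"])
    for number, title, body in old_issues:
        # Keep the old number visible, e.g. in the title or the description.
        writer.writerow([f"[{number}] {title}", body])

print("wrote issues.csv - import it via the project's issue list")
```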
The question here is: am I on the right path (this is the first time I'm trying this), and if not, what would be smarter to try? If this is the right path, can you offer suggestions on how best to do it? If this works, I am going to use it often for a lot of different tasks in this app.
I'm running a PowerApps Canvas app. As part of its program, I want it to be able to reference (read-only) a collection of data. That data is in ServiceNow, and my group is not permitted to access ServiceNow using the API.
During testing of the app, I just had it reference a SharePoint list (which I had filled with some dummy data), but I can re-code those lines as needed to pull from some other data source.
Because I am touching a few different systems here, I am not sure if this is the right way to go, and I'm afraid I'll spend too long trying only to find out that it would never have worked because of x. Thus my question.
This is what I think will work. Am I headed in the right direction?
1. Set up the scheduled report in ServiceNow. (Done!)
2. Program ServiceNow to email the Excel file output. Make sure it always has the same title. (Done!)
3. Build a Power Automate flow to capture that email and save the attached file to a location (OneDrive?) that can be accessed by the app. If there is a file there already, delete it first.
4. Add the Excel file as a data source to the app, and start referencing it as needed.
5. 8-12 hours later, ServiceNow pushes out another scheduled data drop, and the whole thing updates again.
In my perfect world, this system would work completely unattended.
Offhand, a glitch I can see is that ServiceNow generates an Excel file, but the data in it is not formatted as a table, and I think PowerApps can only ingest an Excel file as a data source if the data is in a table. But I might be wrong.
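If that does turn out to be a blocker, one idea (purely an assumption on my part, and it would need somewhere to run, e.g. a small scheduled script rather than Power Automate itself) is to wrap the sheet's used range in a named table before the app reads the file. A sketch with openpyxl; the file name is hypothetical:

```python
from openpyxl import load_workbook
from openpyxl.utils import get_column_letter
from openpyxl.worksheet.table import Table, TableStyleInfo

# Hypothetical file name - whatever the flow saves to OneDrive.
path = "servicenow_export.xlsx"

wb = load_workbook(path)
ws = wb.active

# Wrap the used range (header row included) in a named table,
# which is the shape PowerApps expects to see as a data source.
last_col = get_column_letter(ws.max_column)
ref = f"A1:{last_col}{ws.max_row}"
table = Table(displayName="ServiceNowData", ref=ref)
table.tableStyleInfo = TableStyleInfo(name="TableStyleMedium2", showRowStripes=True)
ws.add_table(table)

wb.save(path)
```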
Am I thinking of this correctly? Is this the best avenue to follow?
Our evolution of using DevOps is continuing (slowly but surely). One thing we've noticed is that some people are trying to put excessive estimates in for their time, but what we really want to encourage is for people to break work down into multiple tasks.
Is there a way that we can set our DevOps work items to only accept a maximum value? I've had a look at the 'rules' and there doesn't seem to be anything there to let us do this, and because it's an out-of-the-box field I don't think we can put a value limit against it.
I suppose what I want to understand is whether it would be possible to do this in some way? Could I do something with the existing 'Original Estimate' field or would I have to create a new custom field to have any chance of preventing people from putting in 100 hours for something that's actually more like 2?
If you are also using Boards, you could highlight work items where the original estimate is higher than a certain value. This would not prevent setting these values, but rather encourage the users to put in lower values.
https://learn.microsoft.com/en-us/azure/devops/boards/boards/customize-cards?view=azure-devops
Beware that this might not really address the underlying issue: people must be convinced of the benefits of splitting up tasks, otherwise they will just work around the tooling, for example by always putting in the maximum value or not recording the hours they actually work.
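Along the same lines, you could periodically flag offending items with a small report against the REST API using a WIQL query. This is a sketch only; the organisation, project, threshold, and PAT environment variable are placeholders:

```python
import base64
import os
import requests

ORG = "your-org"          # placeholder organisation
PROJECT = "your-project"  # placeholder project
PAT = os.environ["AZDO_PAT"]
MAX_HOURS = 16            # whatever you consider an acceptable estimate

# WIQL query for work items whose Original Estimate exceeds the threshold.
wiql = {
    "query": (
        "SELECT [System.Id], [System.Title] FROM WorkItems "
        f"WHERE [Microsoft.VSTS.Scheduling.OriginalEstimate] > {MAX_HOURS} "
        "AND [System.TeamProject] = @project"
    )
}

auth = base64.b64encode(f":{PAT}".encode()).decode()
resp = requests.post(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0",
    headers={"Authorization": f"Basic {auth}"},
    json=wiql,
)
resp.raise_for_status()

for item in resp.json()["workItems"]:
    print("Estimate too large:", item["id"], item["url"])
```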
Is there a way that we can set our DevOps work items to only accept a maximum value?
I am afraid that setting a value limit for the Original Estimate field is currently not supported.
As a workaround, you could create a custom field of type Picklist and then specify the available values in the picklist.
You could add a request for this feature on our UserVoice site, which is our main forum for product suggestions. After the suggestion is raised, you can vote on it and add your comments. The product team will provide updates if they review it.
Lately I have encountered a problem with my view index being rebuilt all the time, and users are having massive issues with this particular view.
I figured it was due to @Date in my selection formula as well as in one of my column formulas. This way the selection formula would be different every second that passes.
So I figured that, since I don't need hours/minutes/seconds in my formulas, I would use @Today instead. This worked well for 2-3 days, and after that the same problem occurred again.
Since the problem is back, I'm not quite sure that this is even the cause. When this particular view is open, I have issues in every tab that's open in Notes, not only in this specific database.
Is this a common/known issue? What can I do to avoid this problem?
Yes, it's a common issue that has been well known since the very early days of Notes more than 20 years ago.
@Date is not a problem on its own. @Now and @Today are both problems.
Using @TextToTime("Today") was a popular workaround that was discovered early on. This hid the problem from the indexer, so the server failed to realize that the view was out of date. It doesn't solve the underlying problem, though, which is that the view is trying to do something that views simply aren't designed to do. Views are intended to be static, requiring update only when documents change. Introducing time into a selection or column formula makes them dynamic, which kills that presumption and is a major source of performance problems.
Using this workaround requires that the view be fully rebuilt every night. You can do that by setting the view index options to "Manual", and setting up a program document to run an updall command with the -T option for the specific database and view once per night.
Note that if your users are spread out across timezones, you'll have to pick one specific time as your standard, and if you have servers spread out across timezones you're going to have a lot of fun figuring out how to make them all show the same documents in the view at all times - but that's common to pretty much all approaches to the problem.
See this IBM Technote for a description of several other options that people have used over the years, with their pros and cons. Also see this article by Andre Guirard, which covers date/time issues in great detail.
I would add that the agent-and-folder solution that they describe in the Technote was generally my preferred approach, but it does have an additional disadvantage that they don't mention: it can eventually lead to an obscure situation where the server throws an error "Folder is larger than supported". This error actually has nothing to do with the size of the folder in documents; it refers to fragmentation of internal structures that occurs as large numbers of documents are moved in and out of the folder over time. It could only be fixed by deleting and re-creating the folder, which you can do in your agent code. I believe this problem may be fixed in more recent versions of Domino, but it caused me a lot of grief back in the Notes 6 and 7 timeframes.
We are currently in the process of upgrading from TFS 2008 to TFS 2012. When TFS 2008 was set up, the people involved didn't understand a lot of what the work item fields were for, and we ended up with very heavily customised templates and in fact lost a lot of default fields. As part of the upgrade to 2012 we are trying to return to the out of the box templates as much as possible to ensure we get to use as many of the features as possible, however there are a small number of custom fields that we need to include for reporting purposes.
Our product development process involves a roadmap for upcoming releases which includes new work as well as bug fixes. When a bug is assigned to be worked on by the developers we would like to be able to choose which release we're targeting the fix for - as far as I can see, Iteration is best suited for this. At the point the bug is closed though, we would also like to track what release it was actually fixed in, since things often get bumped from one release to the next if higher priority bugs or change requests come in, but this is where we come unstuck since I can't seem to assign Iteration to both fields such that the two show different values.
If possible we would prefer not to have global lists that have to be constantly updated with release numbers across our product range (we have around 8 different products which are constantly in development, each with their own release numbers), and leaving one of them as a text field leaves open the possibility that we will get inconsistencies in what people enter, e.g. 1.01 versus 1.1, which would show up in reporting as 2 different releases. As the fields are just looking up a set of values in the background, is there no way that the iteration list can be used twice? Or does someone have an alternative suggestion as to how we can get round this?
What I think I'd suggest in this case is using a COPY rule on a state change event, so that when you move your work item into the Closed state, it would populate your custom field with the value currently in your Iteration field.
This would give you a snapshot of the value at the right point in time which then wouldn't be altered if the iteration was later changed, along with a history entry if it was opened & closed multiple times over its lifetime.
As an iteration is time-limited and a release is perpetual, there is an inherent mismatch of purpose in using Iteration here. Iteration is for planning.
You would be better off creating a release list with the versions that you release.
If you are sprinting, for example, you may not know up front which release the work will end up in before you start. If you are not sprinting, then you are just kidding yourself that you know.
On a particular project we're working with a total of 10 team members.
After about a year of working on the project (and using Mantis as a bug/feature tracker ever since the start), the bug tracker is getting more and more difficult to use, as no standard has been set up that explains how to create new tasks, how to comment on tasks, etc. This leads to multiple entries for the same bugs, an inability to easily find bugs when searching for them, and so on.
How do you organize your bug tracker? Do you use a lot of (sub)categories for different portions of your application (GUI, backend, etc.)? Do you use tags in the title of tasks (e.g. "[GUI][OptionPage] The error")?
Is anyone in your team allowed to introduce new tasks or is this step channeled through a single "Mantis-master" (who would then know whether a new report is a duplicate or an entirely new entry)?
Always link a version control commit to an issue and back, so that you know which commits were made to solve which issue and why a certain commit was made.
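One lightweight way to enforce that convention (a sketch of the idea, not something Mantis itself provides) is a commit-msg hook that rejects commits without an issue reference; the "issue #1234" format here is just a local convention:

```python
#!/usr/bin/env python3
"""Git commit-msg hook: reject commits that don't reference an issue.

Install by copying to .git/hooks/commit-msg and making it executable.
The expected reference format (e.g. 'issue #1234') is a local convention.
"""
import re
import sys

msg_file = sys.argv[1]  # git passes the path to the commit message file
with open(msg_file, encoding="utf-8") as f:
    message = f.read()

if not re.search(r"issue #\d+", message, re.IGNORECASE):
    sys.stderr.write("Commit rejected: reference an issue, e.g. 'issue #1234'\n")
    sys.exit(1)
```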
What we did was introduce a role for approving entries in the bug tracker. This role can be shared by several people. The process is either to approve an entry, to approve it with a small edit, or to reject it with a request for further editing or clarification.
It is better for the general understanding if the role is not given to people working in the (core) team.
In a "large" mantis system on the open web, I've seen the rules go something like
New: Anyone can enter a bug.
Acknowledged: A select few people can upgrade it to this level. These people have seen every new bug for a while, and thus they'll know if it's a duplicate. Or they can pass it back to the reporter for clarification until they understand it well enough to do this job.
Confirmed: Set by decision makers who basically say "We will be doing this".
I don't actually remember where it was, and more importantly I don't know how well it worked.