How can I automatically close stale issues in GitLab?

Is there an out of the box feature to automatically close issues that have not had any activity for a specific period of time e.g. 4 weeks?
If not, what would be the best way to go about implementing this for my Group's issues?

It does not exist per se but you can prepare a script to run into a cronjob or similar tool, so you regularly clean these issues. The script could use the GitLab Issue API, and check issue dates to determine whether to close a specific issue or not. The API has all the required tools for you to make this script with the described logic.
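A minimal sketch of that logic against the GitLab REST API, using the requests library. The instance URL, group ID, and 28-day threshold are assumptions from the question, and pagination is kept simple (one page of up to 100 issues):

```python
import datetime
import os

import requests

GITLAB_URL = "https://gitlab.example.com"  # assumed instance URL
GROUP_ID = 123                             # your group ID
HEADERS = {"PRIVATE-TOKEN": os.environ.get("PRIVATE_TOKEN", "")}

def is_stale(updated_at_iso, days=28):
    """Return True if the ISO timestamp is older than `days` days."""
    updated = datetime.datetime.fromisoformat(updated_at_iso.replace("Z", "+00:00"))
    age = datetime.datetime.now(datetime.timezone.utc) - updated
    return age > datetime.timedelta(days=days)

def close_stale_issues():
    # List the group's open issues, then close those with no recent activity.
    issues = requests.get(
        f"{GITLAB_URL}/api/v4/groups/{GROUP_ID}/issues",
        headers=HEADERS,
        params={"state": "opened", "per_page": 100},
    ).json()
    for issue in issues:
        if is_stale(issue["updated_at"]):
            # Issues are modified through their project, not the group.
            requests.put(
                f"{GITLAB_URL}/api/v4/projects/{issue['project_id']}"
                f"/issues/{issue['iid']}",
                headers=HEADERS,
                params={"state_event": "close"},
            )
```

Schedule `close_stale_issues()` from a cronjob (or a GitLab scheduled pipeline) to get the regular cleanup described above.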

I'm not familiar with such an option, but you can look into the issues list and sort by created or updated date.

This solution uses the python-gitlab package. It fetches all open group issues, adds a comment to those that have been inactive for the stale period, and closes them.
The only prerequisites are to:
- get a PRIVATE_TOKEN and add it to your environment
- find out your group ID and add it below
import datetime
import os

import gitlab

stale_before = datetime.date.today() - datetime.timedelta(days=28)

gl = gitlab.Gitlab(
    url="https://gitlab.example.com", private_token=os.environ["PRIVATE_TOKEN"]
)
group = gl.groups.get(123)  # your group ID
issues = group.issues.list(all=True, state="opened")

for issue in issues:
    # Strip the trailing "Z" so fromisoformat() also works on Python < 3.11.
    updated_at = datetime.datetime.fromisoformat(
        issue.updated_at.replace("Z", "+00:00")
    ).date()
    if updated_at < stale_before:
        print(f"Closing issue #{issue.iid} (last activity on {updated_at}).")
        issue.notes.create({"body": "Closing for inactivity."})
        issue.state_event = "close"
        issue.save()

Related

Azure ML release bug AZUREML_COMPUTE_USE_COMMON_RUNTIME

On 2021-10-13, our application on the Azure ML platform started hitting a new error that causes failures in pipeline steps: Python module import failures, with a warning stack that leads to a pipeline runtime error. To work around it, we needed to set AZUREML_COMPUTE_USE_COMMON_RUNTIME to false. Why is it failing? What are the exact (and long-term) consequences of opting out? Also, Azure ML users: do you think it was rolled out appropriately?
Try adding a new variable to your environment like this:
environment.environment_variables = {"AZUREML_COMPUTE_USE_COMMON_RUNTIME":"false"}
Long term (throughout 2022), AzureML will be fully migrating to the new Common Runtime on AmlCompute. Short term, this change is a large undertaking, and we're on the lookout for tricky functionality of the old Compute Runtime we're not yet handling correctly.
One small note on disabling Common Runtime, it can be more efficient (avoids an Environment rebuild) to add the environment variable directly to the RunConfig:
run_config.environment_variables["AZUREML_COMPUTE_USE_COMMON_RUNTIME"] = "false"
We'd like to get more details about the import failures, so we can fix the regression. Are you setting the PYTHONPATH environment variable to make your custom scripts importable? If so, this is something we're aware isn't working as expected and are looking to fix it within the next two weeks.
We identified the issue and have rolled out a hotfix addressing it. There are two problems that could have caused the import failures. One is that we were overwriting the PYTHONPATH environment variable. The second is that we were not adding the Python script's containing directory to Python's module search path when that directory is not the current working directory.
It would be great if you could try again without setting the AZUREML_COMPUTE_USE_COMMON_RUNTIME environment variable and see if the problem is still there. If it is, please reply to either Lucas's thread or mine with a minimal repro, or a description of where the module you are trying to import is located relative to the script being run and to the root of the snapshot (which is the current working directory).

Launch automation script prior to WO being closed?

In Maximo 7.6.1.1:
Is it possible to launch an automation script to update a WO -- just prior to the WO being closed?
The Change Status action seems to happen before any of the launch points that I've tried.
And of course, once a WO is closed, I can't edit the WO with an automation script, since it is flagged as Is History.
Which launch points have you tried? I think the earliest you can get is Attribute-Validate, where the status value will already have changed but the historyflag should not have been set yet. If that's not working for you, you might be out of luck, unless you're willing to customize the WORKORDER object with Java.
You should be good to go with an attribute launch point script on the workorder.status attribute, event ACTION. I've done this before; for example, to call an API that checks funds associated with a GL account and blocks the status change if necessary.
You just need to check for the current value so other status changes won't be affected.
Python example:
if mbo.getString("status") == 'CLOSE':
    # ... your code ...
Also, remember that you can always use the NOACCESSCHECK flag to change an mbo.
See MboConstants class: https://developer.ibm.com/assetmanagement/7609-maximo-javadoc/
from psdi.mbo import MboConstants
mbo.setValue("attribute", value, MboConstants.NOACCESSCHECK)

Import Failure - Role With Id Does Not Exist

I am getting an import error in a specific environment with a managed CRM 2011 solution. The solution has been imported before into many other environments, but the one in particular where it is failing is throwing the following error:
Dependency Calculation
role With Id = 9e2d2d9b-645f-409f-b31d-3a9c39fcc340 Does Not Exist
I am a bit confused about this. I searched within the solution XML and was not able to find any reference to this particular GUID of 9e2d2d9b-645f-409f-b31d-3a9c39fcc340. I cannot really find it in SQL either, just wandering through the different tables, but perhaps I do not know exactly where to look there.
I have tried importing the solution multiple times. As a desperation effort, I tried renaming all of the security roles in the destination environment prior to importing, but this did not help.
Where is this reference to a security role actually stored? Is this something that is supposed to be within my solution--which my existing CRM deployment is expecting me to import?
How do I fix the problem so that I am able to import this solution?
This is the code we used to fix the issue. We had to run two different scripts. Script A we had to run a total of four times: run it once, attempt the import, and then, if you receive another error for another role, consult the log to find the role causing the problem.
To run script A, you must use a valid RoleTemplateId from your database. We just picked a random one. It should not matter precisely which one you use, because you will erase that data element with script B.
After all of the roles were fixed, we got a different error (complaining that the RoleTemplateId was already related to a role), and had to run script B. That removes the RoleTemplateId from multiple different roles and sets it to NULL.
Script A:
insert into RoleBaseIds(RoleId)
values ('WXYZ74FA-7EA3-452B-ACDD-A491E6821234')
insert into RoleBase(RoleId
,RoleTemplateId
,OrganizationId
,Name
,BusinessUnitId
,CreatedOn
,ModifiedOn
,CreatedBy
)
values ('WXYZ74FA-7EA3-452B-ACDD-A491E6821234'
,'ABCD89FF-7C35-4D69-9900-999C3F605678'
,(select organizationid from Organization)
,'ROLE IMPORT FIX'
,(select BusinessUnitID from BusinessUnit where ParentBusinessUnitId is null)
,GETDATE()
,GETDATE()
,null
)
Script B:
update RoleBase
set RoleTemplateId = NULL
where RoleTemplateID='ABCD89FF-7C35-4D69-9900-999C3F605678'
Perfect solution, worked for me! My only comment would be the error in Script B: it shouldn't clear the template IDs of all roles for the given template, only the template ID of the newly created "fix" role, as follows:
update RoleBase
set RoleTemplateId = NULL
where RoleID='WXYZ74FA-7EA3-452B-ACDD-A491E6821234'
I would've gladly put this in a comment to the answer, but not enough rep as of now.

JIRA Groovy - Link issues from another project

We have a special Jira setup so that 1 Epic in our master project = 1 Jira project full of stories/bugs/etc (Governance/compliance wanted this)
Each Issue type has a custom column called 'Ideal days'
For each Epic to get the total estimated days we have a custom function that Sums all 'Ideal days' of issues linked to that Epic that are in the backlog (we manually do this).
With Groovy Runner can I auto-link all issues in a project (in the backlog) to an Epic? Thanks
If you want to link issues without cloning them, you should use IssueLinkManager.getLinkCollection().
The code should look something like this:
IssueLinkManager issueLinkManager = ComponentManager.getInstance().getIssueLinkManager()
issueLinkManager.getLinkCollection(someIssue, yourUser).each {
    it.getOutwardIssues("Some Link Name")
}
There is a built-in script that clones an issue and links it. You should be able to link based on that code: https://jamieechlin.atlassian.net/wiki/display/GRV/Built-In+Scripts#Built-InScripts-Clonesanissueandlinks

Links to issues from TortoiseSVN log to issue tracker (Flyspray)

We use SVN with TortoiseSVN and Flyspray as our issue tracker. The issue tracker itself shouldn't matter, from what I've understood from the docs.
So I've set up bugtraq:url to:
http://our.server.pl/flyspray/index.php?do=details&task_id=%BUGID%
And bugtraq:logregex to:
[Ff][Ss]#(\d+)
Committed and updated from the repo, but only the number is linked in the log messages. Is it possible to have the whole sequence linked?
The "Issue Numbers Using Regular Expressions" chapter of the docs may help you write a good regex.
Note that bugtraq:logregex can take two regular expressions, one per line: the first finds the whole reference in the message (e.g. FS#123), and the second extracts the bare issue ID from that match. With both set, the whole sequence gets linked, not just the number.
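As a sketch of how the two-regex setup splits the work (the commit message is a made-up example, and the URL comes from the bugtraq:url property above):

```python
import re

# Hypothetical commit message containing Flyspray-style issue references.
message = "Fix login timeout, see FS#123 and fs#45 for details."

# First regex: find each whole reference (this is the span that gets linked).
reference_re = re.compile(r"[Ff][Ss]#\d+")
# Second regex: extract the bare issue ID from a matched reference.
id_re = re.compile(r"\d+")

for ref in reference_re.findall(message):
    issue_id = id_re.search(ref).group()
    url = f"http://our.server.pl/flyspray/index.php?do=details&task_id={issue_id}"
    print(f"{ref} -> {url}")
```

With only the single ID-extracting regex set, just the captured number is linked; the first regex is what widens the linked span to the full "FS#123" text.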
