How do you know the current state of a PR, whether it can be merged, and if not, why? - github-api

I created a PR on a branch that has an automated ("robot") check. The PR has been reviewed, but the robot check failed. How do I find out through the API whether the current PR can be merged, and what the reason for the failure is?
This information is not visible in the API: GET /repos/:owner/:repo/pulls/:pull_number
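For what it's worth, the pulls endpoint does return mergeable and mergeable_state fields once GitHub has computed them, and the outcome of a robot check can usually be read from the check-runs endpoint on the PR's head commit. A rough Python sketch, using only the standard library (the owner, repo, and token values are placeholders you would supply):

```python
import json
import urllib.request

API = "https://api.github.com"

def failing_checks(check_runs):
    """Names and conclusions of check runs that finished without success.

    `check_runs` is the JSON body returned by
    GET /repos/{owner}/{repo}/commits/{ref}/check-runs
    """
    return [
        (run["name"], run.get("conclusion"))
        for run in check_runs["check_runs"]
        if run["status"] == "completed" and run["conclusion"] != "success"
    ]

def _get(url, token):
    req = urllib.request.Request(url, headers={
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3+json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def pr_merge_state(owner, repo, number, token):
    """Report whether a PR can be merged and which checks failed."""
    pr = _get(f"{API}/repos/{owner}/{repo}/pulls/{number}", token)
    sha = pr["head"]["sha"]
    runs = _get(f"{API}/repos/{owner}/{repo}/commits/{sha}/check-runs", token)
    return {
        "mergeable": pr.get("mergeable"),              # null while GitHub recomputes it
        "mergeable_state": pr.get("mergeable_state"),  # e.g. "clean", "dirty", "blocked"
        "failing_checks": failing_checks(runs),
    }
```

Note that mergeable can be null right after a push while GitHub recomputes mergeability, so you may need to poll.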

Related

Do I need FIFO SQS for jira like board view app

Currently I am running a Jira-like board/stage/card management app on AWS ECS with 8 tasks. When a card is moved from one column/stage to another, I look up the card's current stage object, remove the card from that stage, and add it to the destination stage object. This has worked so far because I always look up the card's actual stage in the Postgres database, not based on what the frontend thinks the card's stage is.
Question:
Is it safe to say that even when multiple users move the same card to different stages, the queries would still happen one after the other and the data will not be corrupted (e.g. duplicates)?
If there is still a chance the data can be corrupted, is it a good option to use SQS FIFO to send a message to a Lambda and handle each card movement in sequence?
Any other reason I should use SQS in this case? Or is SQS not applicable at all here?
The most important question here is: what do you want to happen?
Looking at the state of a card in the database, and acting on that is only "wrong" if it doesn't implement the behavior you want. It's true that if the UI can get out of sync with the database, then users might not always get the result they were expecting - but that's all.
Consider likelihood and consequences:
How likely is it that two or more people will update the same card, at the same time, to different stages?
And what is the consequence if they do?
If the board is being used by a 20 person project team, then I'd say the chances were 'low/medium', and if they are paying attention to the board they'll see the unexpected change and have a discussion - because clearly they disagree (or someone moved it to the wrong stage by accident).
So in that situation, I don't think you have a massive problem - as long as the system behavior is what you want (see my further responses below). On the other hand, if your board solution is being used to help operate a nuclear missile launch control system then I don't think your system is safe enough :)
Is it safe to say that even when multiple users move the same card to different stages, the queries would still happen one after the other and the data will not be corrupted (e.g. duplicates)?
Yes, the queries will still happen one after the other, on these assumptions:
That the database query looks up the card based on some stable identifier (e.g. CardID), and
that having successfully retrieved the card, your logic moves it to whatever destination stage is specified - implying there are no rules or state machine that might prohibit certain specific state transitions (e.g. moving from stage 1 to 2 is ok, but moving from stage 2 to 1 is not).
Regarding your second question:
If there is still a chance the data can be corrupted.
It depends on what you mean by 'corruption'. Data corruption is when unintended changes occur in data, and which usually make it unusable (un-processable, un-readable, etc) or useless (processable but incorrect). In your case it's more likely that your system would work properly, and that the data would not be corrupted (it remains processable, and the resulting state of the data is exactly what the system intended it to be), but simply that the results the users see might not be what they were expecting.
Is it a good option to use SQS FIFO to send a message to a Lambda and handle each card movement in sequence?
A FIFO queue would only ensure that requests were processed in the order in which they were received by the queue. Whether or not this is "good" depends on the most important question (first sentence of this answer).
Assuming the assumptions I provided above are correct: there is no state machine logic being enforced, and the card is found and processed via its ID, then all that will happen is that the last request will be the final state. E.g.:
Card State: Card.CardID = 001; Stage = 1.
3 requests then get lodged into the FIFO queue in this order:
User A - Move CardID 001 to Stage 2.
User B - Move CardID 001 to Stage 4.
User C - Move CardID 001 to Stage 3.
Resulting Card State: Card.CardID = 001; Stage = 3.
That's "good" if you want the most recent request to be the result.
Any other reason I should use SQS in this case? Or is SQS not applicable at all here?
The only thing I can think of is that you would be able to store a "history", that way users could see all the recent changes to a card. This would do two things:
Prove that the system processed the requests correctly (according to what it was told to do, and its logic).
Allow users to see who did what, and discuss.
To implement that, you just need to record all relevant changes to the card, in the right order. The thing is, the database can probably do that on its own, so the use of SQS is still debatable; all the queue will do is perhaps help avoid deadlocks.
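A minimal sketch of that idea, with an in-memory list standing in for the history table (in Postgres this would be an INSERT into an append-only table in the same transaction as the move; the field names are made up):

```python
from datetime import datetime, timezone

def move_card(cards, history, card_id, new_stage, user):
    """Move a card and record who moved it, from where, to where, and when."""
    old_stage = cards[card_id]
    cards[card_id] = new_stage
    history.append({
        "card_id": card_id,
        "from": old_stage,
        "to": new_stage,
        "user": user,
        "at": datetime.now(timezone.utc).isoformat(),
    })
```

Replaying the history for a card then shows every transition in order, which is exactly what users need to see who did what and discuss.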
Update - RE Duplicate Cards
You'd have to check the documentation for SQS to see if it can evaluate queue items and remove duplicates.
Assuming it doesn't, you'll have to build something to handle that separately. All I can think of right now is to check for duplicates before adding them to the queue - because once they are there it's probably too late.
One idea:
Establish a component in your code which acts as the proxy/façade for the queue.
Make it smart in that it knows about recent card actions ("recent" is whatever you think it needs to be).
When a new card action comes in, the proxy does a quick check to see if there are any other "recent" duplicate card actions and, if so, decides what to do.
One approach would be a very simple in-memory collection, and cycle out old items as fast as you dare to. "Recent", in terms of the lifetime of items in this collection, doesn't have to be the same as how long it takes for items to get through the queue - it just needs to be long enough to satisfy yourself there's no obvious duplicate.
I can see such a set-up working, but potentially being quite problematic - so if you do it, keep it as simple as possible. ("Simple" meaning: functionally as narrowly-focused as possible).
Sizing will be a consideration - how many items are you processing a minute?
Operational considerations - if it's in-memory it'll be easy to lose (service restarts or whatever), so design the overall system in such a way that if that part goes down, or the list is flushed, items still get added to the queue and things keep working regardless.
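A minimal Python sketch of such a proxy, assuming a simple in-memory dict for the "recent" collection (the QueueProxy name and its parameters are made up for illustration):

```python
import time

class QueueProxy:
    """Facade in front of the queue that drops "recent" duplicate card actions.

    Illustrative only: the in-memory `recent` dict is lost on restart, so the
    system must still keep working if duplicates slip through to the queue.
    """
    def __init__(self, send, ttl_seconds=60, clock=time.monotonic):
        self.send = send          # function that actually enqueues the message
        self.ttl = ttl_seconds    # how long an action counts as "recent"
        self.clock = clock
        self.recent = {}          # (card_id, stage) -> time last seen

    def enqueue(self, card_id, stage):
        now = self.clock()
        # cycle out old items
        self.recent = {k: t for k, t in self.recent.items() if now - t < self.ttl}
        key = (card_id, stage)
        if key in self.recent:
            return False          # duplicate of a recent action: drop it
        self.recent[key] = now
        self.send((card_id, stage))
        return True
```

Note that SQS FIFO queues do have a content-based deduplication option worth checking before building this yourself.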
While you are right that a FIFO queue would be best here, I think your design isn't ideal, or even workable in some situations.
Let's say user 1 has an application state where the card is in stage 1 and he moves it to stage 2. An SQS message will indicate "move the card from stage 1 to stage 2". User 2 has the same initial state where card 1 is in stage 1. User 2 wants to move the card to stage 3, so an SQS message will contain the instruction "move the card from stage 1 to stage 3". But this won't work since you can't find the card in stage 1 anymore!
In this use case, I think a classic API design is best where an API call is made to request the move. In the above case, your API should error out indicating that the card is no longer in the state the user expected it to be in. The application can then reload the current state for that card and allow the user to try again.
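A sketch of that check in Python, assuming the client sends the stage it last saw along with the move request (the function name and return shape are made up for illustration):

```python
def request_move(cards, card_id, expected_stage, new_stage):
    """Move a card only if it is still in the stage the client last saw.

    Returns (ok, current_stage). On a conflict the API would return an
    error (e.g. HTTP 409), and the client reloads the card's current
    stage and lets the user try again.
    """
    current = cards.get(card_id)
    if current != expected_stage:
        return False, current      # the state changed under the user
    cards[card_id] = new_stage
    return True, new_stage
```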

How can I lock the repo so that I'm next in line to merge?

We have a large team of > 50 developers at work. Code is merged to master via Merge Requests (MR), and each merge request must be reviewed and approved by any other teammate. The problem is that I am sometimes stuck in a MR/rebase race. I try to merge, fail, someone else snuck in an MR before me. So I rebase. Try to merge, fail, someone else snuck in before me. So I rebase. Sometimes it takes 2-3 rebases before I win the race.
Is there a way in GitLab to "reserve" the right to merge so that you are guaranteed next in line? Or at least say: "You can't merge because another user has reserved a file you are trying to modify". It seems silly that I have to keep wasting time rolling the dice to see if my code merges.
The VCS we used before (ClearCase) had this exact mechanism. You could "reserve" files such that only you were allowed to check them in. Anyone else would be rejected (besides admins).
GitLab EE allows file locking and informs during the Merge Request that the file is locked:
When a user that is not the author of the locked state of a file
accepts a merge request, an error message will appear stating that the
file is locked.
Merge requests can also be prioritized using labels or issue deadlines; there are existing discussions on merge request priority and how the GitLab community organizes them.

P4 triggers: Can I Get Changelist Number And Still Have Option To Revert?

I'm trying to write a P4 trigger that sends the changelist number to a downstream reporting tool when a user commits. As I understand it, I need to use the change-commit event to get the final changelist number. It appears change-submit does not have access to the altered changelist number if the p4 server had to change it ("The Perforce service might renumber a changelist when you submit it, depending on other users' activities").
However, I would also like to revert the changelist if I'm unable to reach the downstream reporting tool (due to a momentary network issue). It seems change-commit is too late for this. If my trigger using change-commit returns a non-zero value here, the trigger fails, but the changelist is still committed.
Is there a way to combine these two requirements?
Once you get to the change-commit trigger, the change has been fully submitted, so no, you cannot halt the submit process at that point.
Frankly, halting a submit simply because a reporting tool is unavailable seems like an extreme response to a minor problem.
If the only thing wrong with the submit is that you couldn't reach the reporting service, perhaps you can record the changelist number in a queue locally, and have a retry mechanism that makes another attempt to notify the tool later.
This would make the two tools less tightly coupled.
A simple implementation would be to record "highest submitted changelist number successfully sent to the reporting tool" in the Perforce counters table using p4 counter, and then each time a change is submitted, check that counter, and send all the changelist numbers that had been submitted since the last time you successfully contacted the reporting tool.
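A rough Python sketch of that counter-based catch-up, with the p4 calls abstracted away as plain functions. It assumes every changelist number in the gap was submitted; a real trigger would enumerate them with p4 changes -s submitted instead of a plain range:

```python
def changes_to_report(last_reported, submitted_change):
    """Changelist numbers still owed to the reporting tool.

    `last_reported` is the value of a Perforce counter (set via `p4 counter`)
    holding the highest changelist number successfully sent; the simple
    range is an assumption - see the lead-in above.
    """
    return list(range(last_reported + 1, submitted_change + 1))

def on_change_commit(change, get_counter, set_counter, send):
    """change-commit trigger body: best-effort send with catch-up on retry."""
    for c in changes_to_report(get_counter(), change):
        try:
            send(c)
        except OSError:
            return              # network blip: leave the counter; retry next submit
        set_counter(c)          # advance the high-water mark only after success
```

Because the counter only advances on success, an unreachable reporting tool never blocks the submit; the missed changelists are simply sent on the next commit.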

State machine implementation on legacy system

I am very new to this subject, please let me know if you need more context.
I have an old legacy system with lots of complicated business logic/states that we are trying to extract and re-implement in a state machine.
Let's suppose I need to do something like this:
1. Save a User in the state "Approved"
2. Reject if the user data does not satisfy some conditions
3. Otherwise accept the change, save the User, and send a notification
My understanding is that between steps 1 and 2 I need to call the state machine, providing the new data (filled in from a web form).
The state machine needs to get the current state from the database to understand what is the original state and verify if the conditions for switching the state to "Approved" are met.
Is my understanding right?
Thank you
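The flow described above can be sketched in Python; the state names, the condition checked, and the TRANSITIONS table are all made-up placeholders for the real business rules:

```python
# Allowed transitions for the User record; anything not listed is rejected.
# These states are hypothetical stand-ins for the legacy system's states.
TRANSITIONS = {
    ("Pending", "Approved"),
    ("Pending", "Rejected"),
}

def approve(current_state, form_data):
    """Validate a requested move to "Approved".

    `current_state` is loaded from the database first, as described above;
    `form_data` is the new data from the web form.
    """
    if (current_state, "Approved") not in TRANSITIONS:
        return current_state, "invalid transition"
    if not form_data.get("email"):          # stand-in for the real conditions
        return current_state, "rejected: missing data"
    # accept: this is where the User would be saved and a notification sent
    return "Approved", "ok"
```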

Problem with final branch in a parallel activity

The phrase "the final branch in a parallel activity" might seem like a silly thing to say, so I'll clarify. It's a parallel activity with three branches, each containing a simple createTask, onTaskChanged, and completeTask. The branch containing the task that is last to complete seems to break. So every task works in its own right, but the last one encounters a problem.
Say the user clicks the final task's link to open the attached InfoPath form and submits it. Execution gets to the event handler for that onTaskChanged, where a taskCompleted variable gets set to true, which will exit the while loop. I've successfully hit a breakpoint on this line, so I know that happens. However, the final activity in that branch, the completeTask, doesn't get hit.
When submit is clicked in the final form, the "operation in progress" screen stays up for quite a while before returning to the workflow status page. The task that was opened and submitted says "Not Started".
I can disable any of the branches to leave only two, but the same problem happens with whichever is last to complete. Earlier in the workflow I do essentially the same thing: I have another 3-branch parallel activity with each branch containing a task. That one works correctly, which leads me to believe it might be a problem with having two parallel activities in the same sequential workflow.
I've considered the possibility that it might be a correlation token problem. The token that every task branch uses is unique to that branch, and its owner activity name is set to that of the branch. It stands to reason that if the task-complete variable is indeed getting set to true but the while loop isn't being exited, then there's a crossed wire with the variable somewhere. However, I'd still have thought that the task status back on the workflow status page would at least say that the task is in progress.
This is a frustrating show-stopper of a bug for me. Any thoughts or suggestions would be much appreciated so I can investigate them.
My workflow scenario is to reassign a task to its originator after the task's due date expires, by firing a Delay activity.
In my workflow I have a parallel replicator which is used to assign (create) different tasks to different users at the same time. Inside the replicator I used a Listen activity: in the left branch there is an OnTaskChanged activity + ... + completeTask1; in the right branch of the ListenActivity there is a Delay activity followed by a CompleteTask2 activity and a Code activity to reassign the task to the task originator. I'm sure about the correlation tokens on both completeTask activities. Everything works fine in the left branch, but an error occurs in the right branch, which contains the Delay activity --> CompleteTask activity.
Let's say we have two tasks assigned to two users and they have one hour to complete them, but they didn't, so the Delay activity fires for both tasks. In the workflow, the first task is then completed, but the second task produces an error.
I think the problem is with the TaskId property of the completeTask: it isn't updated with the second task's ID, so it tries to complete a task which has already been completed.
