I'm trying to get information out of GitLab about when I changed the iteration of an issue. Meaning: "when I moved a ticket from Sprint 5 to Sprint 6".
I tried via the API, GraphQL, the database... Any solution or help would be really appreciated. Even just telling me which table it is stored in would be helpful.
I know there is an iteration field on the issue table and also in the queries, but I need the historical information, i.e. I want to know whether a ticket moved from Sprint 1 to 2 to 3 and so on.
Finally found the solution:
GET /projects/:id/issues/:issue_iid/resource_iteration_events
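In case it helps someone, here is a minimal sketch of calling that endpoint from Node; the host, project ID, issue IID and token are placeholders to adjust to your own instance:

// Sketch: read the iteration history of one issue via the GitLab REST API.
// GITLAB_URL, projectId, issueIid and GITLAB_TOKEN are placeholders.
const GITLAB_URL = "https://gitlab.example.com";
const projectId = 123;
const issueIid = 45;

async function listIterationEvents(): Promise<void> {
  const res = await fetch(
    `${GITLAB_URL}/api/v4/projects/${projectId}/issues/${issueIid}/resource_iteration_events`,
    { headers: { "PRIVATE-TOKEN": process.env.GITLAB_TOKEN ?? "" } }
  );
  if (!res.ok) throw new Error(`GitLab API returned ${res.status}`);
  // Each event carries an action ("add" or "remove") and the iteration involved,
  // so the ordered events give the sprint-to-sprint history of the issue.
  const events = await res.json();
  for (const ev of events) {
    console.log(ev.created_at, ev.action, ev.iteration?.title);
  }
}

listIterationEvents().catch(console.error);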
I was wondering what the best way is to delete my collection (which has subcollections) in Firestore. I need to delete the entire collection (using code such as this: https://firebase.google.com/docs/firestore/solutions/delete-collections) every day at 20:00 UTC.
My concern is that users will still be able to query/write documents in the collection/sub-collections while they are being deleted. If they try to read/update/delete a document in the collection while the batch delete is running, will this cause any problems?
I have thought of writing Firestore rules that block reads if the request time is between 20:00 and 20:05 UTC, but it seems a bit hacky and I am not sure it's even possible.
Could anyone provide some assistance on how to handle potential reads happening at the same time as the batch delete?
Thanks a lot
Side note: the delete-collections code mentions a required token, functions.config().fb.token. Is this always the same if the code is running on Cloud Functions?
There are two main approaches I can think of here:
Retry deleting the collection after the first pass, to get any documents created while your code was deleting.
Block the users from writing, with a global lock in security rules.
Even if you do the second, I'd still also do the first - as it's very easy to miss a write when there are enough users.
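For the scheduled delete itself, here is a rough sketch along the lines of the doc you linked, with the retry pass from point 1 built in. The "issues" collection path is a placeholder, and fb.token is assumed to have been set in functions config exactly as that doc describes:

// Sketch of a scheduled Cloud Function that wipes the collection at 20:00 UTC
// and immediately runs a second pass to catch documents written in the meantime.
// "issues" is a placeholder path; fb.token must be set via functions config.
import * as functions from "firebase-functions";
// firebase-tools ships without type definitions, so require() keeps this simple
const firebase_tools = require("firebase-tools");

async function deleteCollection(path: string): Promise<void> {
  await firebase_tools.firestore.delete(path, {
    project: process.env.GCLOUD_PROJECT,
    recursive: true,
    yes: true,
    token: functions.config().fb.token,
  });
}

export const nightlyWipe = functions
  .runWith({ timeoutSeconds: 540, memory: "2GB" })
  .pubsub.schedule("0 20 * * *")
  .timeZone("Etc/UTC")
  .onRun(async () => {
    await deleteCollection("issues"); // first pass
    await deleteCollection("issues"); // retry pass for late writes
    return null;
  });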
We have several instances where serial items are stuck "In Transit". This is likely due to a bug where we are able to perform the first step of a two-step inventory transfer while the same item is technically still in a production job in the JAMs manufacturing process. But since it's tied up in a job, it can't be received on the other end, so the item is stuck in transit. Even if the actual item can be resolved, the other items on that transfer can't be received either.
Usually when we have issues with warehouse locations, we just do a one-step transfer to get the item into the correct warehouse, but there is no option to transfer from "In Transit" to the correct final warehouse.
This is less about the bugs/issues that caused it, and more of a general question about how to force an item out of In Transit and into the correct warehouse.
We are on 2017 R2. Hoping someone has advice on how we can rectify these situations (even if we have to go into the DB to do so).
Thanks.
I know it might be a bit of a confusing title, but I couldn't come up with anything better.
The problem ...
I have an ADF pipeline with 3 activities: first a Copy to a DB, then two stored procedures. All are scheduled daily and use WindowEnd to read the right directory or to pass a date to the SP.
There is no way I can get an import date into the XML files that we are receiving, so I'm trying to add it in the first SP.
The problem is that once the first activity of the pipeline is done, two others are started: the second activity of the same slice (the SP that adds the dates), but when history is being loaded, the same pipeline also starts a copy for another slice.
So I'm getting mixed-up data.
As you can see in the 'Last Attempt Start'.
Does anybody have an idea how to avoid this?
ADF Monitoring
In case somebody hits a similar problem...
I've solved it by working with daily named tables.
Each slice puts its data into a staging table with a _YYYYMMDD suffix, which can be set as "tableName": "$$Text.Format('[stg].[filesin_1_{0:yyyyMMdd}]', SliceEnd)".
So there is no longer any problem with parallelism.
The only disadvantage is that the SPs coming after this first step have to work with dynamic SQL, as the table name they select from is variable.
But that wasn't a big coding problem.
Works like a charm!
I'm trying to solve an issue with Microsoft PowerApps where you are limited to storing only 5 values in a collection. I have been looking around for a while now to find a solution.
What I am essentially trying to do is create an offline issue logger from a tablet, where users will sync their devices to a database to retrieve all existing entries. They will then go offline to a customer site and take pictures and log issues offline to then sync when they get back to the office.
With this issue persisting, I cannot import more than 5 issues from the database and I cannot log more than 5 issues to then upload to the database.
I have gone through the documentation a few times now trying to find anything stating whether the storage is limited or not. I haven't been successful.
Tutorials such as https://powerapps.microsoft.com/en-us/tutorials/show-images-text-gallery-sort-filter/ show that they are adding 5 products to work with, but that is the only mention of data in a collection.
Is anyone else experiencing the same issue? Or could anyone suggest a way around this?
Thank you
Update: The Collection view under File > Collections only shows 5 items in the table.
If you create a dropdown of the data, all the values are saved offline.
By default a collection can hold up to 500 entries; if you need more than that, you can write code to expand the limit. If you go to File > Collections, it only shows 5 items as a preview of the data. This is replicated in the tutorials and can lead you to believe that 5 is the maximum number of items you can store.
I understand that a multi update will not be atomic and there is a chance the update query may fail during the process. The error can be found via getLastError. But what if I want to roll back the updates that have already been done by that particular query? Is there any method simpler than the tedious two-phase commit?
For instance, say I have a simple collection of users and their phone models. Now I do a multi update on all the users who have a Nexus 4, changing it to a Nexus 5 (dreamy, isn't it?). The only condition is all or nothing - all the N5s are taken back if even one N4 user doesn't get his. Now, somehow MongoDB fails in between, and I am stuck with a few users on the N4 and a few on the N5. From what I have gathered, I can't have Mongo roll back directly. If the operation fails, I will have to do a manual update of the N5 users back to N4, and if that fails too, keep repeating it.
Or, with a more complicated collection, I will have to keep a new key, viz. a status, and update it with keywords like updating/updated.
This is what I understand. I wanted to know if there is any simpler way. I assume from the comments the answer is a big no.
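For what it's worth, here is a minimal sketch of the manual compensation plus status-key idea described above, using the current Node.js driver's updateMany (the modern equivalent of a multi update); database, collection, and field names are made up for the example:

// Sketch: "all or nothing" by hand. Tag every document the forward update touches,
// so a failed run can be reverted by targeting only the tagged documents.
import { MongoClient } from "mongodb";

async function upgradeNexus(uri: string): Promise<void> {
  const client = new MongoClient(uri);
  await client.connect();
  const users = client.db("demo").collection("users");
  try {
    // Forward pass: upgrade and mark each touched document.
    await users.updateMany(
      { phone: "Nexus 4" },
      { $set: { phone: "Nexus 5", upgradeStatus: "updated" } }
    );
  } catch (err) {
    // Compensation: revert only the documents the forward pass touched.
    // Keep retrying, since the revert can itself fail part-way through.
    for (;;) {
      try {
        await users.updateMany(
          { upgradeStatus: "updated" },
          { $set: { phone: "Nexus 4" }, $unset: { upgradeStatus: "" } }
        );
        break;
      } catch {
        await new Promise((resolve) => setTimeout(resolve, 1000)); // back off, retry
      }
    }
    throw err;
  } finally {
    await client.close();
  }
}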