We have several instances where serial items are stuck "In Transit". This is likely due to a bug where we are able to perform the first step of a two-step inventory transfer while the same item is technically still tied up in a production job through the JAMS manufacturing process. Because the item is tied up in a job, it can't be received on the other end, so it is stuck in transit. Even if the problem item itself can be resolved, the other items on that transfer can't be received as a result either.
Usually when we have issues with warehouse locations, we just do a one-step transfer to get the item into the correct warehouse, but there is no option to transfer from "In Transit" to the correct final warehouse.
This is less about the bugs/issues that caused it, and more a general question about how to force an item out of In Transit and into the correct warehouse.
We are on 2017 R2. Hoping someone has some advice on how we can rectify these situations (even if we have to go into the database to do so).
Thanks.
A bit of context: my Azure Synapse pipeline makes a GET request to a REST API in order to import data into the Data Lake (ADLS Gen2) in Parquet format.
I plan to request data from the API on an hourly basis in order to get the previous hour's information. I have also considered setting the trigger to run every half hour to get the data for the previous 30 minutes.
The thing is: this last GET request and Copy Data debug run took a bit less than 20 minutes. The DIU setting was "Auto", and it equals 4 even if I set it manually to 8 in the activity settings.
I was wondering if there are any useful suggestions to make a Copy Data activity run faster, whatever the cost may be (I would really like info about the cost too, if you consider it pertinent).
Thanks in advance!
Mateo
You need to check which part is running slow.
You can click on the glasses icon to see the copy data details.
If the latency is in "Time to first byte" or "Reading from source", the issue is on the REST API side.
If the latency is in "Writing to sink", the problem may be with writing to the data lake.
If the issue is on the API side, try to contact the provider. Another option, if applicable, is to use a few Copy Data activities, each copying a part of the data (see the sketch below).
If the issue is on the data lake side, you should check the settings on the sink side.
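For illustration only, here is a minimal Python sketch of that "copy in parts" idea, assuming a hypothetical API endpoint that accepts from/to timestamps as query parameters. In Synapse the equivalent would be several Copy Data activities (or a ForEach over time windows), each fed a different window through pipeline parameters.

# Minimal sketch: split one hour of data into four time slices and fetch
# them in parallel, the way several Copy Data activities would.
import concurrent.futures
import datetime as dt
import requests

API_URL = "https://api.example.com/data"   # hypothetical endpoint
WINDOW = dt.timedelta(minutes=15)          # 4 slices per hour

def fetch_slice(start: dt.datetime) -> bytes:
    """Fetch a single time slice from the API (one 'part' of the copy)."""
    params = {"from": start.isoformat(), "to": (start + WINDOW).isoformat()}
    resp = requests.get(API_URL, params=params, timeout=300)
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    hour_start = dt.datetime(2023, 1, 1, 12, 0)
    slice_starts = [hour_start + i * WINDOW for i in range(4)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        payloads = list(pool.map(fetch_slice, slice_starts))
    print([len(p) for p in payloads])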
We are starting to think about how we will utilize locations within warehouses to keep tighter tracking and control over where our items are physically located. In that context, we are trying to figure out what the actual workflow would be as it relates to performing physical inventory counts and review. I have read the documentation, but I'm wondering how best to think through the scenario below.
Let's say, to start, that we have 10 serial items across 5 locations (so 2 in each location), and assume that all these locations are in the same warehouse.
Two weeks go by, and there is movement between these locations by way of the inventory transfer document process. But for this example, let's say that users did not record an inventory transfer every single time they physically moved an item between locations.
So at this point, where Acumatica thinks the serial items are doesn't reflect where they really are.
So now we do a physical inventory for this warehouse (all 5 locations together).
By the time we complete the inventory count and review, we will see the 10 items in the same warehouse. BUT:
will we be able to see the variances/problems against the locations? Meaning, will it highlight/catch where the items are actually located vs. where Acumatica thought they were located?
and assuming yes, is there anything in the physical inventory process that will handle automatically transferring each item to its correct location within the warehouse? Or does this then need to be done manually through an inventory transfer?
Any help would be much appreciated.
Thanks.
I'm trying to get information out of GitLab about when I changed the iteration of an issue. Meaning: "when I moved a ticket from Sprint 5 to Sprint 6".
I tried the API, GraphQL, the database... Any solution/help would be really appreciated. Even just telling me which table it is stored in would be helpful.
I know there is an iteration field on the issue table and also in the queries, but I need the historical information. Meaning I want to know whether a ticket moved from Sprint 1 to 2 to 3, etc.
Finally found the solution:
GET /projects/:id/issues/:issue_iid/resource_iteration_events
See also the GitLab documentation for the Resource iteration events API.
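For reference, a minimal Python sketch against that endpoint; the instance URL, project ID, issue IID, and token below are placeholders:

# Sketch: list the iteration (sprint) changes recorded for one issue.
import requests

GITLAB_URL = "https://gitlab.example.com"  # your GitLab instance
PROJECT_ID = 123                           # placeholder project id
ISSUE_IID = 45                             # placeholder issue iid
TOKEN = "glpat-..."                        # personal access token

resp = requests.get(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/issues/{ISSUE_IID}"
    "/resource_iteration_events",
    headers={"PRIVATE-TOKEN": TOKEN},
    timeout=30,
)
resp.raise_for_status()

for event in resp.json():
    # Each event records an "add" or "remove" of an iteration, plus who
    # made the change and when.
    iteration_title = (event.get("iteration") or {}).get("title")
    print(event["created_at"], event["action"], iteration_title)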
I have the following problem in GAMS
I implemented a location routing problem. While checking the .log file I noticed something that could speed up the calculation time immensely if I fixed it.
Let me state an example first:
Let's assume that we have a set of all 140 nodes i1*i140, where nodes i1*i10 represent potential warehouses and i11*i140 represent the customers to be served. So we have:
Sets
   i     "all nodes"                        /i1*i140/
   WH(i) "potential warehouse locations"    /i1*i10/
   K(i)  "customer sites"                   /i11*i140/ ;

alias(i,j);

Binary Variables
   z(WH)    "1 if warehouse location WH is opened"
   y(K,WH)  "1 if customer site K is assigned to warehouse WH"
   x(i,j)   "1 if node j is visited immediately after node i" ;

Parameters
   WHKAPA    "capacity of a warehouse"
   d(K)      "demand of customer K"
   Cfix      "opening costs for a warehouse"
   dist(i,j) "distance between node i and node j" ;
The objective function minimizes the fixed opening costs and the routing costs.
With the capacity of a warehouse set large enough to serve all customers and the opening costs set high for each warehouse, my assumption was that the optimal solution would consist of a single opened warehouse serving all customers.
My assumption was right; however, I noticed that CPLEX spends a very long time first exploring the part of the solution space in which far too many warehouses are opened.
The optimality gap then "jumps" to a near-optimal solution once fewer warehouses are opened (see attached screenshot). So basically a lot of time is spent scanning obviously "bad" solutions. I deliberately used examples where the obviously best solution has to consist of only one warehouse.
My question to you:
How can I "direct" CPLEX to check out solutions with only one opened warehouse first, without imposing a maximum number of opened warehouses in the model (i.e. sum(WH, z(WH)) =l= 1 ;)?
I tried branching priorities using the .prior suffix together with the mipordind = 1 option. CPLEX still checked solutions with 10 opened warehouses, so I assume it did not help.
I also tried setting the warehouse opening costs ridiculously high. However, solutions that opened the maximum possible number of warehouses were still checked and time was lost.
Sorry for the long post
I hope I have put all necessary information in :)
Looking forward to your advice
Kind Regards
Adam
I've read through this excellent feedback on Azure Search. However, I have to be a bit more explicit in questioning one of the answers to question #1 from that list...
...When you index data, it is not available for querying immediately.
...Currently there is no mechanism to control concurrent updates to the same document in an index.
Eventual consistency is fine - I perform a few updates and eventually I will see my updates on read/query.
However, no guarantee on the ordering of updates is really problematic. Perhaps I'm misunderstanding. Let's assume this basic scenario:
1) update index entry E.fieldX w/ foo at time 12:00:01
2) update index entry E.fieldX w/ bar at time 12:00:02
From what I gather, it's entirely possible that E.fieldX will contain "foo" after all updates have been processed?
If that is true, it seems to severely limit the applicability of this product.
Currently, Azure Search does not provide document-level optimistic concurrency, primarily because the overwhelming majority of scenarios don't require it. Please vote for the External Version UserVoice suggestion to help us prioritize this ask.
One way to manage data ingress concurrency today is to use Azure Search indexers. Indexers guarantee that they will process only the current version of a source document at each point in time, removing the potential for races.
Ordering is unknown if you issue multiple concurrent requests, since you cannot predict in which order they'll reach the server.
If you issue indexing batches in sequence (that is, start the second batch only after you have seen an ACK from the service for the first batch), you shouldn't see reordering. A minimal sketch of that pattern follows.
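For illustration, here is a minimal Python sketch of that sequential pattern using the azure-search-documents SDK; the service endpoint, index name, and key field "id" are assumptions made for this example.

# Sketch: send indexing batches strictly one after another, waiting for the
# service to acknowledge each batch before sending the next, so the later
# value of E.fieldX ("bar") is the one that sticks.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",  # placeholder
    index_name="my-index",                                  # placeholder index
    credential=AzureKeyCredential("<admin-api-key>"),       # placeholder key
)

batches = [
    [{"id": "E", "fieldX": "foo"}],  # update at 12:00:01
    [{"id": "E", "fieldX": "bar"}],  # update at 12:00:02, must win
]

for batch in batches:
    # merge_or_upload_documents returns after the service has acknowledged
    # the batch; only then is the next batch submitted, preserving order.
    results = client.merge_or_upload_documents(documents=batch)
    if not all(r.succeeded for r in results):
        raise RuntimeError("Batch was not fully acknowledged; stopping.")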