Acumatica Physical Inventory Process and Transferring between Locations

We are starting to think about how we will utilize locations within warehouses to keep tighter tracking and control over where our items are physically located. In that context, we are trying to figure out what the actual workflow would be when it comes to performing physical inventory counts and review. I have read the documentation, but I'm wondering how best to think through the scenario below.
Let's say to start, that we have 10 serial items across 5 locations (so let's assume 2 in each location). And assume that all these locations are in the same warehouse.
Two weeks go by, and there is movement between these locations by way of the inventory transfer document process. But for this example, let's say that users did not record an inventory transfer every time they physically moved items between locations.
So at this point, where Acumatica thinks the serial items are does not reflect where they really are.
So now we do Physical inventory for this warehouse (all 5 locations together).
By the time we complete the inventory count and review, we will see the 10 items in the same warehouse. BUT:
Will we be able to see the variances/problems against the locations? Meaning, will it highlight/catch where the items are actually located vs. where Acumatica thought they were located?
And assuming yes, is there anything in the physical inventory process that will handle automatically transferring each item to its correct location within the warehouse? Or does this then need to be done manually through an inventory transfer?
Any help would be much appreciated.
Thanks.

Related

In AnyLogic, how to create a changing, shared resource?

A fake example that emulates what I am trying to accomplish is the following:
I am emulating a restaurant that has employees.
There are three sinks for washing dishes.
The "Number of Employees" required to run the sinks uses this formula:
Number of Employees = (Number of Sinks) + 2
This is because there are some efficiencies in the workflow when multiple sinks are being used at the same time.
I have a resource for employees and an agent (population) to represent the 3 sinks.
When a new sink starts to get used, I check the "Number of Employees" currently "busy". I then calculate how many additional workers need to be seized.
The problem I am running into is that when a sink stops being used, the required "Number of Employees" may change as well. I then need to release just enough employees to accurately reflect the required "Number of Employees". The "release" block, however, releases the entirety of the "seize" block. This may be more than the employees that should be released.
Is there an easier way to manage a changing shared resource?
Or if this is a good way, how do you manage the releasing of individual employees not the entire seize block?
What you say is not entirely correct:
The "release" block, however, releases the entirety of the "seize" block. This may be more than the employees that should be released.
If you go to the release block, you will see that you have the option to release a specific number of resources of a given pool. Moreover, that number is dynamic so it can be a function with the number of sinks as an input.
I think the trick is that you MAY need more than one consecutive release block, one for employees and one for sinks, depending on your model design.
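If it helps to see the pattern outside AnyLogic, here is a minimal sketch in Python/SimPy (not AnyLogic; all names and numbers are invented for illustration) of seizing a variable number of employees and releasing only the surplus when the sink count drops, by keeping hold of the individual seize requests:
import simpy

NUM_EMPLOYEES = 5                          # assumed pool size

def employees_needed(active_sinks):
    # formula from the question: employees = sinks + 2 (0 when nothing is running)
    return active_sinks + 2 if active_sinks > 0 else 0

def dishwashing(env, employees):
    held = []                              # one request object per seized employee
    for active_sinks in [1, 3, 2, 0]:      # example schedule of sink usage
        target = employees_needed(active_sinks)
        while len(held) < target:          # seize additional employees
            req = employees.request()
            yield req
            held.append(req)
        while len(held) > target:          # release only the surplus, not everything
            employees.release(held.pop())
        print(f"t={env.now}: {active_sinks} sinks, {len(held)} employees busy")
        yield env.timeout(10)

env = simpy.Environment()
employees = simpy.Resource(env, capacity=NUM_EMPLOYEES)
env.process(dishwashing(env, employees))
env.run()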

Database join in the '60s with tape / punch cards only?

We are a large company, selling frobnication services to tens of thousands of customers via phone calls. Orders get recorded on punch cards, featuring
a customer ID
a date
the dollar amount of frobnication bought.
In order to process these into monthly bills to our users, we're ready to buy computing equipment modern enough for the '60s. I presume we're going to store our user database on a tape (... since... that's where you can store a lot of data with 60s tech, right?).
Sales record punch cards are coming in unsorted. Even if the records on tape are sorted by e.g. customer ID, doing one "seek" / lookup for each punch card / customer ID coming in (to update e.g. a "sum" amount) would be very slow. Meanwhile, if you have e.g. 256k of RAM (even less?), significant parts of the data set just won't fit.
My question is: how can this database operation be done in practice? Do you sort the punch cards first & then go through the tape linearly? How do you even sort punch cards? Or do you copy all of them to a tape first? Do you need multiple batch jobs to do all of this? How much of this is code we'll have to write vs. something that's coming with the OS?
(... yes I've heard about those fridge-size devices with spinning metal disks that can randomly seek many times a second; I don't think we'll be able to afford those.)
In the '60s you would most likely:
Store your data in a master file sorted in key sequence.
Sort the punch cards to a temporary disk file.
Do a master-file update using the temporary disk file (the transaction file) and the master file.
They might have used an indexed file or a database (e.g. IMS) if online access was required.
Master File Update
For a master-file update, both files need to be sorted into the same sequence and you match on keys; the program writes an updated master file using the details from the two. It is basically like a SQL outer join.
Logic
Read Master-File
Read Transaction-File
While not eof-Master-File and not eof-Transaction-File
    if Transaction-File-key < Master-File-key
        Write a new master record built from the Transaction-File details to Updated-Master-File
        Read Transaction-File
    else_if Transaction-File-key == Master-File-key
        Update the current Master-File record with the Transaction-File details (do not write it yet; later transactions may carry the same key)
        Read Transaction-File
    else
        Write the current Master-File record (with any updates applied) to Updated-Master-File
        Read Master-File
    end_if
end_while
Process remaining Transaction-File records (write them as new master records)
Process remaining Master-File records (write them out, with any updates applied)
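For anyone who wants to see the matching logic run, here is a minimal sketch in Python (obviously not period hardware), assuming both inputs are already sorted by customer ID, that the master file has only cust_id and balance columns, and that a card simply adds its amount to the balance; all file and column names are made up for illustration:
import csv

def update_master(master_path, trans_path, out_path):
    # Sequential master-file update: read each sorted file once, write a new
    # master file. Keys are compared as text, so customer IDs are assumed to
    # be fixed width / zero padded, as they would be on cards.
    with open(master_path, newline="") as m, \
         open(trans_path, newline="") as t, \
         open(out_path, "w", newline="") as o:
        master = csv.DictReader(m)                      # columns: cust_id, balance
        trans = csv.DictReader(t)                       # columns: cust_id, amount
        out = csv.DictWriter(o, fieldnames=["cust_id", "balance"])
        out.writeheader()

        mrec = next(master, None)
        trec = next(trans, None)
        while mrec is not None and trec is not None:
            if trec["cust_id"] < mrec["cust_id"]:
                # transaction with no matching master record: insert it
                out.writerow({"cust_id": trec["cust_id"], "balance": trec["amount"]})
                trec = next(trans, None)
            elif trec["cust_id"] == mrec["cust_id"]:
                # apply the card; keep reading, more cards may share this key
                mrec["balance"] = str(float(mrec["balance"]) + float(trec["amount"]))
                trec = next(trans, None)
            else:
                # master key is behind: write it out (with any updates applied)
                out.writerow(mrec)
                mrec = next(master, None)
        while mrec is not None:                         # drain the master file
            out.writerow(mrec)
            mrec = next(master, None)
        while trec is not None:                         # drain the transactions
            out.writerow({"cust_id": trec["cust_id"], "balance": trec["amount"]})
            trec = next(trans, None)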

GAMS : Avoid Scan of obviously wrong solutions in CPLEX

I have the following problem in GAMS
I implemented a location routing problem. While checking the .log file I noticed something that could speed up the calculation time immensely if I fixed it.
Let me state an example first:
Let's assume that we have a set of 140 nodes i1*i140, where nodes i1*i10 represent potential warehouse locations and i11*i140 represent the customers to be served. So we have
Sets
   i     "all nodes"                          /i1*i140/
   WH(i) "only potential warehouse locations" /i1*i10/
   K(i)  "only customer sites"                /i11*i140/ ;
Alias (i,j);
Binary Variables
   z(WH)   "1 if warehouse location WH is opened"
   y(K,WH) "1 if customer site K is assigned to warehouse WH"
   x(i,j)  "1 if node j is visited immediately after node i" ;
Parameters
   WHKAPA    "capacity of a warehouse"
   d(K)      "demand of customer K"
   Cfix      "opening costs for a warehouse"
   dist(i,j) "distance from node i to node j" ;
The objective function minimizes the fixed opening costs and the routing costs.
With the capacity of a warehouse set large enough to serve all customers, and high opening costs for each warehouse, my assumption was that the optimal solution would consist of one opened warehouse serving all customers.
My assumption was right; however, I noticed that CPLEX first takes a very long time to check the part of the solution space where way too many warehouses are opened.
The optimality gap then "jumps" to a near-optimal solution once fewer warehouses are opened (see attached screenshot). So basically a lot of time is spent scanning obviously "bad" solutions. I deliberately used examples where the obviously best solution would have to consist of one warehouse only.
My question to you:
How can I "direct" CPLEX to check out solutions consisting of one opened warehouse first, without imposing a maximum number of opened warehouses in the model (i.e. sum(WH, z(WH)) =l= 1;)?
I tried branching priorities using the .prior suffix and the mipordind = 1 option. CPLEX still checked solutions consisting of 10 opened warehouses, so I assume it did not help.
I also tried setting the warehouse opening costs ridiculously high. However, solutions that included opening the maximum number of possible warehouses were still checked, and time was lost.
Sorry for the long post
I hope I have put all necessary information in :)
Looking forward to your advice
Kind Regards
Adam

Cassandra database design - 1000 columns or dynamically created tables

I wanted to hear your advice about a potential solution for an advertising agency database.
We want to build a system that will be able to track users in a way that lets us know what they did on the ads, and where.
There are many types of ads, and some of them are also FORMS, so the user can fill in data.
Each form is different, but we don't want to create a table per form.
We thought of creating a very WIDE table with 1k columns, dozens for each type, and storing the data there.
In short:
Use Cassandra;
Create daily tables, so data will be stored in a table per day;
Each table will have 1000 cols (100 for datetime, 100 for int, etc.);
Application logic will map the data into the relevant cols so we will be able to search and update them later.
What do you think of this?
Be careful with generating tables dynamically in Cassandra. You will start to have problems when you have too many tables because there is a per table memory overhead. Per Jonathan Ellis:
Cassandra will reserve a minimum of 1MB for each CF's memtable: http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-performance
Even daily tables are not a good idea in Cassandra (a table per form is even worse). I recommend you build one table that can hold all your data and that you know will scale well -- verify this with cassandra-stress.
At this point, heed mikea's advice and start thinking about your access patterns (see Patrick's video series); you may have to build additional tables to meet your querying needs.
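As a rough illustration of the "one table that can hold all your data" idea (this is only a sketch; the keyspace, table, and column names are invented, not from the question), the varying form fields can live in a single map column instead of 1000 typed columns, so new forms need no schema change:
import uuid
from cassandra.cluster import Cluster          # DataStax Python driver

session = Cluster(["127.0.0.1"]).connect("ads")  # assumed keyspace "ads"

# One static table for every ad/form event; field values are kept as text
# and parsed by the application.
session.execute("""
    CREATE TABLE IF NOT EXISTS form_events (
        user_id    text,
        event_time timeuuid,
        ad_id      text,
        fields     map<text, text>,
        PRIMARY KEY ((user_id), event_time)
    ) WITH CLUSTERING ORDER BY (event_time DESC)
""")

session.execute(
    "INSERT INTO form_events (user_id, event_time, ad_id, fields) "
    "VALUES (%s, %s, %s, %s)",
    ("user-123", uuid.uuid1(), "ad-42", {"age": "34", "city": "Haifa"}),
)
Whether a map column (or a clustering column per field) fits depends on the access patterns mentioned above, so it is worth validating the model with cassandra-stress before committing to it.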
Note: For anyone wishing for a schemaless option in c*:
https://blog.compose.io/schema-less-is-usually-a-lie/
http://rustyrazorblade.com/2014/07/the-myth-of-schema-less/

Cassandra - multiple counters based on timeframe

I am building an application and using Cassandra as my datastore. In the app, I need to track event counts per user, per event source, and need to query the counts for different windows of time. For example, some possible queries could be:
Get all events for user A for the last week.
Get all events for all users for yesterday where the event source is source S.
Get all events for the last month.
Low-latency reads are my biggest concern here. From my research, the best way I can think of to implement this is a different counter table for each permutation of source, user, and predefined time window. For example, create a count_by_source_and_user table, where the partition key is a combination of source and user ID, and then create a count_by_user table for just the user counts.
This seems messy. What's the best way to do this, or could you point towards some good examples of modeling these types of problems in Cassandra?
You are right. If latency is your main concern, and it should be if you have already chosen Cassandra, you need to create a table for each of your queries. This is the recommended way to use Cassandra: optimize for reads and don't worry about redundant storage. And since within every table the data is stored sequentially according to the index, you cannot index a table in more than one way (as you would with a relational DB). I hope this helps. Look for the "Data Modeling" presentation that is usually given at "Cassandra Day" events. You may find it on "Planet Cassandra" or Jon Haddad's blog.
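For concreteness, here is a minimal sketch of the table-per-query approach with Cassandra counter tables (keyspace, table, and column names are invented for illustration): one table serves "counts per user per day", another "counts per source per day", and the application increments both on every event:
from datetime import date, timedelta
from cassandra.cluster import Cluster          # DataStax Python driver

session = Cluster(["127.0.0.1"]).connect("metrics")  # assumed keyspace

# One counter table per query shape; the write path updates all of them.
session.execute("""
    CREATE TABLE IF NOT EXISTS counts_by_user_day (
        user_id text, day date, count counter,
        PRIMARY KEY ((user_id), day)
    )
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS counts_by_source_day (
        source text, day date, count counter,
        PRIMARY KEY ((source), day)
    )
""")

def record_event(user_id, source, day):
    # Denormalized write: one increment per table / query pattern.
    session.execute(
        "UPDATE counts_by_user_day SET count = count + 1 "
        "WHERE user_id = %s AND day = %s", (user_id, day))
    session.execute(
        "UPDATE counts_by_source_day SET count = count + 1 "
        "WHERE source = %s AND day = %s", (source, day))

record_event("user-A", "source-S", date.today())

# "Counts for user A for the last week" becomes a single-partition slice:
week_ago = date.today() - timedelta(days=7)
rows = session.execute(
    "SELECT day, count FROM counts_by_user_day "
    "WHERE user_id = %s AND day >= %s", ("user-A", week_ago))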
