Supply Demand Modeling - economics

I thought I would ask the SO community for help with a project I am currently working on. I need to model the price for a widget in a market situation. The price for the widget should result from the current supply and demand. Users will be able to buy and sell the widget at the current price. As users buy the widget, demand will go up along with the price; conversely, as users sell the widget, supply will go up and the price will go down. The quantity and current price of the widget will be stored in a database along with the total number of buys and sells for the widget.
Protrade.com has an excellent example of buying and trading widgets (players and teams); I would like to model my system in a similar fashion.
Are there any good programming libraries that will accurately model a market based on supply and demand?

Unfortunately I do not know of any libraries, but perhaps you can tap into Excel's statistics functions.
My opinion follows.
This is why economics is so boring: everything is supply/demand.
Something along the lines of the following should work as a start:
ListPrice = (Cost + Profit) * (demand/supply * economic-factor)
where economic-factor is some determined constant.
If you have some historical data, e.g. daily supply/demand ratios, you could factor it in, perhaps using some time-based scale.
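As a rough illustration only, here is a minimal Python sketch of that formula; the function name, parameters, and the default economic_factor are made up for the example, not taken from any library:

def list_price(cost, profit, demand, supply, economic_factor=1.0):
    """Toy price model: base price scaled by the demand/supply ratio.

    cost, profit    -- base pricing inputs for the widget
    demand, supply  -- current counts of buys and sells (or outstanding orders)
    economic_factor -- hand-tuned constant controlling price sensitivity
    """
    if supply <= 0:
        supply = 1  # avoid division by zero when nothing is for sale
    return (cost + profit) * (demand / supply * economic_factor)

# With economic_factor = 1.0, a balanced market (demand == supply) prices the
# widget at cost + profit; more buys than sells pushes the price up.
print(list_price(cost=10.0, profit=2.0, demand=150, supply=100))  # 18.0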

Related

Is an order something transient or not

In my company (a train company) there is a sort of battle going on between two viewpoints on something. Before going too deep into the problem, I'm first going to explain the different domains we have in our landscape now.
Product: All product master data and their characteristics.
Think their name, their possible list of choices...
Location: All location master data that can be chosen, like stations, stops, etc.
Quote: To get a price for a specific choice of a product with their attributes.
Order: The order domain where you can make a positive order but also a negative one for reimbursements.
Ticket: This is essentially what you get from paying the order. It's the product, but in the state it is in once received by the customer.
The problem
Viewpoint PURPLE (I don't want to create bias)
When an order is transformed into all "tickets", we convert the order details, like price, into the ticket model, in order to make Order something we can throw away. Order is seen as something transient, kind of like the bag you get at a supermarket: it's the goods inside the bag that matter, not the bag itself.
When a reimbursement flow starts, you do not need to go to the order; you have everything in the Ticket domain. This means data from the Order will be duplicated into the Ticket.
But not all of it, only the things that are relevant, like the price for example.
Viewpoint YELLOW (I don't want to create bias)
You do the same as above, but you do not store the price in the Ticket domain. The Ticket domain only consists of details that are relevant for the "ticket" to work. Price is not allowed in there because it belongs to the order. When a reimbursement flow starts, it is allowed to fetch those details from the order. This makes Order something you cannot throw away, as it holds crucial data.
The benefit here is that Order is not "polluting" the Ticket with unnecessary data. But this is debatable, and price is a good example.
I wish to know your ideas about these two viewpoints.
There is no "Don't repeat yourself" when it comes to the business domain. The only thing that dictates the business domain is the business requirements. If the requirements state that the ticket should work independent of the order changes, then you have to duplicate things.
But in this case, the requirements are ambiguous. There is no correct design using the currently specified requirements. Building code based on assumptions is the #1 way of getting bad code, since you most likely will have to do a redesign down the road.
You need to go back to the product owner and ask him about the difference between the Order and the Ticket.
For instance:
What should happen to the ticket if the order is deleted?
What happens to the order and/or ticket if the product price changes?
What happens to a ticket if the order is reimbursed?
Go back, get better requirements and then start to design the application.

Maximo Crew Type Quantity from JOBLABOR to WPLABOR

If CrewType is chosen on a JobPlan, Maximo's default behaviour is to explicitly set Quantity to 1 and make it read-only. I have changed that behaviour in JOBLABOR, and now I can edit the Quantity on the JobPlan. (This has been done via an attribute launch point automation script on JOBLABOR.AMCREWTYPE.)
However, when the JobPlan is applied to a WO, it still explicitly sets the QUANTITY for CrewType to 1 on WPLABOR, thus not carrying the quantity across from JOBLABOR. Where can I override that behaviour? Could this be done via an automation script for the run action of the JPNUM field?
As to why the quantity field defaults back to 1 and is made read-only if Crew Type or Crew are selected, I would expect that's because conceptually, it doesn't make sense to have a quantity other than one for these types of job plan labour. The same is true of assigning labour records to a Job Plan because it doesn't make sense to assign a quantity greater than one of the same person's labour record. In fact the only type of job plan labour you're able to adjust the quantity for in Maximo out of the box is craft because you could justifiably require multiples of a particular craft (e.g. 3 electricians) assigned to the job plan.
A crew type is a template for a crew, which is itself the labour (internal or otherwise) with particular crafts, skill levels and qualifications to perform the work. Since each Crew is distinct and made up of a group of individuals assigned to the positions in the crew, I don't think it makes sense to say you want to assign 2 instances of the same crew, say Crew A, to a particular job plan. If, for example, two instances of the same type of team are required to perform the work, you probably need to define two distinctly separate crews of the same crew type and add each crew (not the Crew Type) to the Job Plan, rather than customise the system to allow you to add a crew type with a quantity greater than 1.
Alternatively you could assign crafts to the job plan which you then have the option of specifying that the job plan requires, say 5 electricians, for example for 6 hours.
From Administration -> Organisations you can select an Organisation then click Crew Assignment Options to define how total work hours are calculated for Work Orders, Job Plans, Tasks and Activities associated with Crews.
I can see why you would want to plan for n crews of some type. For example, it takes a crew of a certain type to stand a pole for street lights, and if you've got a whole street to put lights on, why not have 5 of that crew type planned on this work order?
Unfortunately, it seems Maximo is pretty set against doing that. For what explanation it may yield, I recommend opening a Support Case with IBM, asking for the rationale or how to work around that limitation. And you could submit an RFE with rationale for why the limitation should not be there.
To satisfy the immediate need, maybe you could set up crew types as a service? Or create your own Quantity attributes for use with Crew Type and implement all the logic around those? I don't like reinventing the wheel, but "the juice might be worth the squeeze" if you're going to be fighting Maximo every step of the way, anyway. Or, if it really means that much to you, customize this aspect of Maximo to do what you want.
Whatever happens, I think your problem is not a programming problem but a Maximo problem. And as StackOverflow is a programming site, your problem is beyond the scope of this site. However, please come back if you have a programming problem! We love solving those!
Update
Having thought more about this, I think this is not a Maximo problem. I think Maximo is just (you could argue, forcibly) encouraging better planning. In the street light example, you wouldn't expect n crews to self-organize to stand poles. That would be poor planning. Rather, you would have a top level work order for doing the street with a lower level work order for each individual pole. So, each of those lower level work orders would only require 1 of the Crew Type. This arrangement would still flow through to scheduling as needing n of that Crew Type, but whether 1 Crew of that type was used n times or whether n crews were used once or somewhere in between is a scheduling problem, not a planning problem.

Is my database design consistent with RDBMS principles?

I am working on my website where I sell concert tickets.
I am working on designing the part of the website where I generate tickets based on the seats and rows available.
After some thinking and drawing I have come to the conclusion that this design would be best for my problem.
I was wondering: is this a poor design, or are there any improvements that I can make?
Thank you
I wouldn't expect to have a table of unbooked seats. A table of bookings seems more logical. Your concerts table looks questionable if you expect to have a series of dates for the same concert.
Perhaps you should first sketch out the key functions of your site as User Stories or Use Cases and list out the required attributes for each. That could give you a better set of requirements for your database design, e.g. what customer attributes; what about seat attributes such as restricted view, standing places or accessible places for the disabled.
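To make the bookings suggestion concrete, here is a rough sketch only; the table and column names are made up for illustration, not a prescribed design:

Bookings
--------
BookingID (pk)
ConcertID (fk -> Concerts)
SeatID (fk -> Seats)
CustomerID (fk -> Customers)
BookedAt (datetime)

With this shape, a seat is "unbooked" for a given concert simply when no Bookings row exists for that (ConcertID, SeatID) pair, so you never have to maintain a table of unbooked seats.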

Find most significant/informative review or create new review

Given a list of reviews (10000+), such as:
has great pizzas, price is low and customer service is average
Customer service was horrible, there was a long wait during lunch, food was ok
has amazing pizzas and I highly recommend it, they also have deals/specials weekly. Very upscale, and the atmosphere is great
etc.
The goal is to find the most significant reviews (around 20) out of all. The review should encapsulate as much information about the merchant as possible. (Food satisfaction, Price, Wait Time, etc)
I have been looking at some ways of doing this, chunking/collocation/idf but not sure if any of them are viable.
You can do a multi-label classification task for each review, then:
You can retrieve the reviews with the most tags (order by count(tags) desc)
Give weight (positive or negative) to the labels and retrieve the reviews with max(sum(weight)) ORDER DESC
In both cases you can exclude labels or reviews with certain labels
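As a minimal sketch of the weighted variant, assuming the multi-label classification step has already tagged each review (the tag names and weights below are invented for the example):

# Each review has been tagged by some multi-label classifier (not shown here).
reviews = [
    {"text": "has great pizzas, price is low ...", "tags": ["food", "price", "service"]},
    {"text": "Customer service was horrible ...",  "tags": ["service", "wait_time", "food"]},
    {"text": "has amazing pizzas ...",             "tags": ["food", "atmosphere"]},
]

# Hand-picked weights: how much information each aspect contributes.
weights = {"food": 2, "price": 2, "service": 2, "wait_time": 1, "atmosphere": 1}

def score(review):
    return sum(weights.get(tag, 0) for tag in review["tags"])

# Keep the ~20 reviews that cover the most (and most important) aspects.
top_reviews = sorted(reviews, key=score, reverse=True)[:20]
for r in top_reviews:
    print(score(r), r["text"])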

1 vs 1 vote: calculate ratings (Flickchart.com)

Instead of rating items with grades from 1 to 10, I would like to have 1 vs 1 "fights". Two items are displayed beside each other and you pick the one which you like more. Based on these "fight" results, an algorithm should calculate ratings for each item.
You can see this approach on Flickchart.com where movies are rated using this approach.
It looks like this:
As you can see, items are pushed upwards if they win a "fight". The ranking is always changing based on the "fight" results. But this can't be based only on the win rate (here 54%), since it's harder to win against "Titanic" than against "25th Hour" or so.
There are a few things which are quite unclear for me:
- How are the ratings calculated? How do you decide which film takes first place in the ranking? You have to consider how often an item wins and how good the beaten items are.
- How to choose which items have a "fight"?
Of course, you can't tell me how Flickchart exactly does this all. But maybe you can tell me how it could be done. Thanks in advance!
This might not be exactly what Flickchart is doing, but you could use a variant of the Elo rating system used in chess (and other sports), since these are essentially fights/games that items win or lose.
Basically, all movies start off with no wins/losses, and every time they win they gain a certain number of points. A common base gain is around 20 (but any number will do): beating a movie with the same rating as yours gives exactly those 20 points, beating a weaker movie might give around 10, and beating a stronger movie might give around 30. The other way around, losing to a stronger movie costs you only 10 points, but losing to a weaker movie costs you 30.
The specifics of the algorithm are in the Wikipedia article.
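A minimal sketch of an Elo-style update, assuming a K-factor of 20 and new movies starting at a rating of 1200 (both numbers are arbitrary choices for the example):

K = 20          # maximum points exchanged per fight (arbitrary)
START = 1200.0  # rating assigned to a movie with no fights yet (arbitrary)

def expected(rating_a, rating_b):
    # Probability that A beats B under the Elo model
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(winner, loser, ratings):
    ra = ratings.get(winner, START)
    rb = ratings.get(loser, START)
    ea = expected(ra, rb)
    ratings[winner] = ra + K * (1 - ea)   # winner gains more when the loser was stronger
    ratings[loser]  = rb - K * (1 - ea)   # loser gives up the same number of points

ratings = {}
update("The Dark Knight", "Titanic", ratings)
update("Titanic", "25th Hour", ratings)
print(sorted(ratings.items(), key=lambda kv: kv[1], reverse=True))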
How are the ratings calculated? How do you decide which film takes first place in the ranking? You have to consider how often an item wins and how good the beaten items are.
What you want is a weighted rating, also called a Bayesian estimate.
I think IMDB's Top 250 movies is a better starting point to make a ranking website. Some movies have 300,000+ votes while others have fewer than 50,000. IMDB uses a Bayesian estimate to rank movies against one another without unfairly weighting popular movies. The algorithm is given at the bottom of the page:
weighted rating (WR) = (v ÷ (v + m)) × R + (m ÷ (v + m)) × C

where:
R = average rating for the movie (mean)
v = number of votes for the movie
m = minimum votes required to be listed in the Top 250 (currently 3000)
C = the mean vote across the whole report (currently 6.9)

For the Top 250, only votes from regular voters are considered.
I don't know how IMDB chose 3000 as their minimum vote. They could have chosen 1000 or 10000, and the list would have been more or less the same. Maybe they're using "average number of votes after 6 weeks in the box office" or maybe they're using trial and error.
In any case, it doesn't really matter. The formula above is pretty much the standard for normalizing votes on ranking websites, and I'm almost certain Flickchart uses something similar in the background.
The formula works so well because it "pulls" ratings toward the mean, so ratings above the mean are slightly decreased, ratings below the mean are slightly increased. However, the strength of the pull is inversely proportional to the number of votes a movie has. So movies with few votes are pulled more aggressively toward the mean than movies with lots of votes. Here are two data points to demonstrate the property:
Rank  Movie                        Votes     Avg Rating  Weighted Rating
----  -----                        -----     ----------  ---------------
219   La Strada                    15,000+   8.2         8.0
221   Pirates of the Caribbean 2   210,000+  8.0         8.0
Both movies' ratings are pulled down, but the pull on La Strada is more dramatic since it has fewer votes and therefore is not as representative as ratings for PotC.
For your specific case, you have two items in a "fight". You should probably design your table as follows:
Items
-----
ItemID (pk)
FightsWon (int)
FightsEngaged (int)
The average rating is FightsWon / FightsEngaged. The weighted rating is calculated using the formula above.
When a user chooses a winner in a fight, increase the winning item's FightsWon field by 1 and increase both items' FightsEngaged fields by 1.
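A minimal sketch of computing that weighted rating in Python; the values of m and C below are placeholders you would tune for your own data:

M = 50    # placeholder: minimum fights before an item is taken "seriously"
C = 0.5   # placeholder: mean win rate across all items (0.5 for symmetric fights)

def weighted_rating(fights_won, fights_engaged, m=M, c=C):
    if fights_engaged == 0:
        return c                       # no data yet: fall back to the global mean
    r = fights_won / fights_engaged    # raw average rating (win rate)
    v = fights_engaged
    return (v / (v + m)) * r + (m / (v + m)) * c

# Few fights: pulled strongly toward the mean. Many fights: barely moved.
print(weighted_rating(8, 10))      # raw 0.80 -> about 0.55
print(weighted_rating(800, 1000))  # raw 0.80 -> about 0.79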
Hope this helps!
- Juliet
I've been toying with the problem of ranking items by means of pair-wise comparison for some time myself, and wanted to take the time to describe the ideas I came up with so far.
For now I'm simply sorting by <fights won> / <total fights>, highest first. This works fine if you're the only one voting, or if there are a lot of people voting. Otherwise it can quickly become inaccurate.
One problem here is how to choose which two items should fight. One thing that does seem to work well (subjectively) is to let the item with the fewest fights so far fight against a random item. This leads to a relatively uniform number of fights across items (-> accuracy), at the cost of possibly being boring for the voter(s): they will often be comparing the newest item against something else, which is kind of boring. To alleviate that, you can take the n items with the lowest fight counts and choose one of those randomly as the first contender.
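A minimal sketch of that selection rule; the value of n and the data structure are arbitrary choices for the example:

import random

def pick_contenders(fight_counts, n=5):
    """fight_counts: dict mapping item -> number of fights so far."""
    # First contender: random pick among the n least-fought items
    least_fought = sorted(fight_counts, key=fight_counts.get)[:n]
    first = random.choice(least_fought)
    # Second contender: any other item, chosen uniformly at random
    second = random.choice([item for item in fight_counts if item != first])
    return first, second

counts = {"Titanic": 12, "25th Hour": 3, "Alien": 7, "Up": 0, "Heat": 5}
print(pick_contenders(counts, n=2))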
You mentioned that you want to make victories against strong opponents count more than against weak ones. As mentioned in other posts above, rating systems used for chess and the like (Elo, Glicko) may work. Personally I would love to use Microsoft's TrueSkill, as it seems to be the most accurate and also provides a good way to pick two items to pit against each other -- the ones with the highest draw-probability as calculated by TrueSkill. But alas, my math understanding is not good enough to really understand and implement the details of the system, and it may be subject to licensing fees anyway...
Collective Choice: Competitive Ranking Systems has a nice overview of a few different rating systems if you need more information/inspiration.
Other than rating systems, you could also try various simple ladder systems. One example:
Randomize the list of items, so they are ranked 1 to n
Pick two items at random and let them fight
If the winner is ranked above the loser: Do nothing
If the loser is ranked above the winner:
If the loser is directly above the winner: Swap them
Else: Move the winner up the ladder x% toward the loser of the fight.
Goto 2
This is relatively unstable in the beginning, but should improve over time. It never ceases to fluctuate though.
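A minimal sketch of that ladder, with x arbitrarily set to 50% and a random stand-in for the user's choice:

import random

def ladder_fight(ranking, winner, loser, x=0.5):
    """ranking: list of items ordered best (index 0) to worst. x: fraction to move up."""
    wi, li = ranking.index(winner), ranking.index(loser)
    if wi < li:
        return                                              # winner already above loser: do nothing
    if wi == li + 1:
        ranking[wi], ranking[li] = ranking[li], ranking[wi]  # loser directly above: swap them
    else:
        new_pos = wi - int((wi - li) * x)                    # move winner x% of the way toward the loser
        ranking.insert(new_pos, ranking.pop(wi))

items = ["A", "B", "C", "D", "E", "F"]
random.shuffle(items)               # step 1: start from a random order
for _ in range(1000):               # steps 2-4: repeat random fights
    a, b = random.sample(items, 2)
    winner = random.choice([a, b])  # stand-in for the user's actual choice
    loser = b if winner == a else a
    ladder_fight(items, winner, loser)
print(items)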
Hope I could help at least a little.
As for flickchart, I've been playing around with it a little bit, and I think the rating system is pretty unsophisticated. In pseudo-code, my guess is that it looks something like this:
if rank(loser) == null and rank(winner) == null
insert loser at position estimated from global rank
insert winner at position estimated from global rank
else if rank(winner) == null or rank(winner) < rank(loser)
then advance winner to loser's position and demote loser and all following by 1
Why do I think this? First, I'm completely convinced that their Bayesian priors are not based on any careful mining of my previous choices. They seem to have no way to guess that, because I like Return of the Jedi, I probably like The Empire Strikes Back. In fact, they can't even figure out that, because I've seen Home Alone 2, I may have seen Home Alone 1. After hundreds of ratings, the choice hasn't come up.
Second of all, if you look at the above code you might find a little bug, which you will definitely notice on the site. You may notice that sometimes you will make a choice and the winner will slide by one. This seems to only happen when the loser wasn't previously added. My guess is that what is happening is that the loser is being added higher than the winner.
Other than that, you will notice that rankings do not change at all unless a lower ranked movie beats a higher ranked movie directly. I don't think any real scores are being kept: the site seems to be entirely memoryless except for the ordinal rank of each movie and your most recent rating.
Or you might want to use a variant of PageRank; see Prof. Wilf's cool description.
After having thought things through, the best solution for this film ranking is as follows.
Required data:
The number of votes taken on each pairing of films.
And also a sorted version of this data grouped like in radix sort
How many times each film was voted for in each pairing of films
Optional data:
How many times each film has been involved in a vote for each user
How to select a vote for a user:
Pick out a vote selection from the sorted list in the lowest used radix group (randomly)
Optional: use the user's personal voting stats to filter out films they've been asked to vote on too many times, possibly moving onto higher radix buckets if there's nothing suitable.
How to calculate the ranking score for a film:
Start the score at 0
Go through each other film in the system
Add voteswon / votestaken versus this film to the score
If no votes have been taken between these two films, add 0.5 instead (This is of course assuming you want new films to start out as average in the rankings)
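A minimal sketch of that scoring step, assuming the vote tallies are kept in a dict keyed by (film, other_film) pairs; the film names and counts are made up:

# votes[(a, b)] = number of times a beat b in a head-to-head vote
votes = {
    ("Alien", "Titanic"): 7, ("Titanic", "Alien"): 3,
    ("Alien", "Up"): 2,      ("Up", "Alien"): 2,
}

def score(film, all_films):
    total = 0.0
    for other in all_films:
        if other == film:
            continue
        won = votes.get((film, other), 0)
        lost = votes.get((other, film), 0)
        taken = won + lost
        # No votes between the pair yet: treat the matchup as a 50/50 split
        total += won / taken if taken else 0.5
    return total

films = ["Alien", "Titanic", "Up"]
print(sorted(films, key=lambda f: score(f, films), reverse=True))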
Note: The optional stuff is just there to stop the user getting bored, but may be useful for other statistics also, especially if you include how many times they voted for that film over another.
Making sure that newly added films have statistics collected on them ASAP, with votes distributed evenly across all existing films, is vital to keeping the stats correct for the rest of the films. It may be worth staggering the entry of a batch of new films into the system to avoid temporary glitches in the rankings (though these would be neither immediate nor severe).
===THIS IS THE ORIGINAL ANSWER===
The problem is actually very easy. I am assuming here that you want to order by preference to vote for the film, i.e. the #1-ranked film is the one most likely to be chosen in a vote. If you make it so that in each vote you choose two films completely at random, you can calculate this with simple maths.
Firstly each selection of two films to vote on is equally likely, so results from each vote can just be added together for a score (saves multiplying by 1/nC2 on everything). And obviously the probability of someone voting for one specific film against another specific film is just votesforthisfilm / numberofvotes.
So to calculate the score for one film, you just sum votesforthisfilm / numberofvotes for every film it can be matched against.
There is a little trouble here if you add a new film which hasn't had a considerable number of votes against all the other films, so you probably want to leave it out of the rankings until a number of votes has built up.
===WHAT FOLLOWS IS MOSTLY WRONG AND IS MAINLY HERE FOR HISTORICAL CONTEXT===
This scoring method is derived from a Markov chain of your voting system, assuming that all possible vote questions were equally likely. [This first sentence is wrong because making all vote questions have to be equally likely in the Markov chain to get meaningful results] Of course, this is not the case, and actually you can fix this as well, since you know how likely each vote question was, it's just the number of votes that have been done on that question! [The probability of getting a particular vote question is actually irrelevant so this doesn't help] In this way, using the same graph but with the edges weighted by votes done...
Probability of getting each film given that it was included in the vote is the same as probability of getting each film and it being in the vote divided by the probability it was included in the vote. This comes to sumoverallvotes((votesforthisfilm / numberofvotes) * numberofvotes) / totalnumberofvotes divided by sumoverallvotes(numberofvotes) / totalnumberofvotes. With much cancelling this comes to votesforthisfilmoverallvotes / numberofvotesinvolvingthisfilm. Which is really simple!
http://en.wikipedia.org/wiki/Maximize_Affirmed_Majorities?
(Or the BestThing voting algorithm, originally called the VeryBlindDate voting algorithm)
I believe this kind of 1 vs. 1 scenario might be a type of conjoint analysis called Discrete Choice. I see these fairly often in web surveys for market research: the customer is generally asked to choose between two or more sets of features and pick the one they prefer most. Unfortunately it is fairly complicated (for a non-statistics guy like myself), so you may have difficulty understanding it.
I heartily recommend the book Programming Collective Intelligence for all sorts of interesting algorithms and data analysis along these lines.
