Could anyone tell me who the inventors of copying GC and generational GC are?
The inventor of the copying algorithm is Marvin L. Minsky, and the generational algorithm is due to Henry Lieberman and Carl Hewitt.
Blocksworld is apparently a benchmark domain in automated planning.
This domain consists of a set of blocks, a table and a robot hand.
The blocks can be on top of other blocks or on the table;
a block that has nothing on it is clear;
and the robot hand can hold one block or be empty.
The goal is to find a plan to move from one configuration of blocks to another.
Can someone explain what makes this a non-trivial problem? I can't think of a problem instance where the solution is not trivial (e.g., build the desired towers bottom-up, one block at a time).
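For concreteness, here is a rough Python sketch of the trivial strategy I have in mind (the state representation and all names are my own invention): unstack everything onto the table, then build each goal tower bottom-up.

```python
# Naive two-phase Blocks World planner (illustrative sketch).
# 'on' maps each block to the block beneath it, or to 'table'.
# Assumes goal_on gives a position for every block.

def naive_plan(on, goal_on):
    plan = []  # list of (block, destination) moves

    def blocks_under(b):
        n = 0
        while on[b] != 'table':
            b = on[b]
            n += 1
        return n

    # Phase 1: clear everything onto the table, topmost blocks first.
    for b in sorted(on, key=blocks_under, reverse=True):
        if on[b] != 'table':
            plan.append((b, 'table'))

    # Phase 2: build each goal tower bottom-up; a block is moved once
    # its destination has reached its final position.
    final = {b for b in goal_on if goal_on[b] == 'table'}
    pending = set(goal_on) - final
    while pending:
        for b in sorted(pending):
            if goal_on[b] in final:
                plan.append((b, goal_on[b]))
                final.add(b)
                pending.remove(b)
                break
    return plan

# C on A on B  ->  B on C on A
on = {'B': 'table', 'A': 'B', 'C': 'A'}
goal = {'A': 'table', 'C': 'A', 'B': 'C'}
print(naive_plan(on, goal))
# [('C', 'table'), ('A', 'table'), ('C', 'A'), ('B', 'C')]
```

Every block is moved at most twice, so this always produces a valid plan of at most 2n moves for n blocks; what I don't see is why producing a plan with the minimum number of moves should be hard.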
There are one historical and two practical reasons for Blocks World being a benchmark of interest.
The historical one is that Blocks World was used to illustrate the so-called Sussman's Anomaly. It no longer has any scientific relevance, but it was used to illustrate the limitations and challenges of planning algorithms that approach the problem of planning as that of searching through the space of plans directly. The link points to a chapter of the following book, which is a good introduction to Automated Planning:
Automated Planning and Acting
Malik Ghallab, Dana Nau, Paolo Traverso
Cambridge University Press
It used to be the case, especially back in the mid-1990s when SAT solving really took off, that Blocks World was an example of how limited the state of the art in Automated Planning was at the time.
As you write in your question, solving Blocks World is easy: the algorithm you sketch is well known and clearly runs in polynomial time. Finding an optimal plan, though, is not easy. I refer you to this excellent book:
Understanding Planning Tasks: Domain Complexity and Heuristic Decomposition
Malte Helmert
Springer, 2006
or his shorter, classic paper
Complexity results for standard benchmark domains in planning
Malte Helmert
Artificial Intelligence, 2003
The second "practical" reason for the relevance of Blocks World is that, even being a "simple" problem, it can defeat planning heuristics and elaborate algorithms or compilations to other computational frameworks such as SAT or SMT.
For instance, it wasn't until relatively recently (2012) that Jussi Rintanen showed good performance on that "simple" benchmark, after heavily modifying standard SAT solvers:
Planning as satisfiability: Heuristics
Jussi Rintanen
Artificial Intelligence, 2012
by compiling heuristics into them as clauses, which the combination of unit propagation, clause learning, and variable-selection heuristics can exploit to obtain deductive lower bounds quickly.
EDIT: Further details have been requested on the remark above that optimal planning for Blocks World is not easy. From the references provided, this paper
On the complexity of blocks-world planning
Naresh Gupta and Dana S. Nau
Artificial Intelligence, 1992
has the original proof, which reduces HITTING-SET (one of Karp's NP-complete problems) to the problem of computing optimal plans for Blocks World.
An easier-to-access paper, which looks quite deeply into planning in the Blocks World domain, is
Blocks World revisited
John Slaney, Sylvie Thiébaux
Artificial Intelligence, 2001
Figure 1 in the paper above shows an example of an instance that illustrates the intuition behind Gupta and Nau's complexity proof.
Another Blocksworld-related paper that I found quite interesting is How Good is Almost Perfect? (AAAI 2008) by Helmert and Röger.
It showed that even with an almost perfect heuristic (one that, for every possible state, is wrong by only a constant), A* search is bound to produce an exponentially large search space. (This shows that even with almost perfect information about the goal distance, search will still get lost in this domain.)
Looking at Java/OpenJDK, it seems that every "new" garbage collection implementation is roughly an order of magnitude larger than the preceding one.
What are the sizes of Garbage Collector implementations in other runtimes like the CLR?
Can a non-trivial improvement in garbage collection be equated with a sharp increase in the size/complexity of the implementation?
Going further, can this observation be made about Garbage Collector implementations in general or are there certain design decisions (in Java or elsewhere) which especially foster or mandate these size increases?
Really interesting question... really broad but I'll try my best to give decent input.
Going further, can this observation be made about Garbage Collector implementations in general or are there certain design decisions (in Java or elsewhere) which especially foster or mandate these size increases?
Well, Java's garbage collector initially didn't support generations, so adding that feature made it grow in size. Another thing that adds to the size/complexity of the JVM garbage collector is its configuration: the user can tweak the GC in a number of ways, which increases the complexity. See this doc if you really want to know all the tunable features of the JVM garbage collector: http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
This Stack Overflow answer goes into it in a bit more depth: https://stackoverflow.com/a/492821/25981
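To give a flavor of that configurability, here is a hypothetical invocation using a handful of real HotSpot flags from that tuning guide (the application name is just a placeholder):

```
# Pick the concurrent collector, size the heap and the generations,
# and cap how many young collections an object survives before being
# promoted to the old generation. "MyApp" is a placeholder.
java -XX:+UseConcMarkSweepGC \
     -Xms512m -Xmx2g \
     -XX:NewRatio=2 \
     -XX:SurvivorRatio=8 \
     -XX:MaxTenuringThreshold=15 \
     MyApp
```

Each knob (collector choice, generation sizing, tenuring policy, and many more) is another code path the collector has to support, which is one concrete way configurability translates into implementation size.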
As to comparing size vs features...
Here is a very simple garbage collector for C:
https://code.google.com/p/dpgc/
It has very few features and even requires the user to mark blocks as references are shared. Its size is very small, weighing in at one C file and one header file.
Compare this to a fully featured GC like the one used in the .NET Framework. Below I've included a bunch of talks with the two architects of the .NET garbage collector.
http://channel9.msdn.com/Tags/garbage+collector
Specifically this link: http://channel9.msdn.com/Shows/Going+Deep/Patrick-Dussud-Garbage-Collection-Past-Present-and-Future where they discuss the evolution of the .NET GC, both in terms of features and complexity (which is related to lines of code).
I'm looking for a historical overview of computer graphics developments, a timeline of such things as
bump mapping
bloom
stencil buffer shadows
volumetric fog
subsurface scattering
radiosity
etc, the more inclusive the better
according to when they were invented and when they became practical for real-time mainstream use.
Hopefully this research would include an analysis of how much the end result has been improved by the invention of new techniques versus better algorithms for old techniques versus simply applying them more extensively given improving hardware.
Got any links for this sort of thing? Thanks.
Have you checked the Wikipedia article on Rendering? It has a list of influential articles on the subject, and their years of publication.
I'm working with just the basics of garbage collection and the different algorithms (plus pros, cons, etc.). I'm trying to determine the best garbage collection algorithm to use for different scenarios,
such as: everything on the heap the same size; everything small with a short lifespan; everything large with a longer lifespan.
- If everything is the same size, heap fragmentation isn't an issue, and I wouldn't have to worry about compaction. So maybe reference counting?
- Small objects with a short lifespan?
- Large objects with a longer lifespan? (Possibly generational, because of the lifespan.)
I'm looking at: reference counting, mark & sweep, stop & copy, and generational (a quick illustrative sketch of mark & sweep follows below).
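To ground the options I'm comparing, here is a toy mark-and-sweep collector in Python (purely illustrative; all the names are my own). One thing it makes visible: the sweep pass touches every object on the heap, live or dead, which is one reason workloads full of short-lived objects tend to favor copying/generational collectors that only touch live data.

```python
# Toy mark-and-sweep collector (illustrative only; names invented).
class Obj:
    def __init__(self):
        self.refs = []       # outgoing references to other Objs
        self.marked = False

heap = []                    # every allocated object is tracked here

def allocate():
    o = Obj()
    heap.append(o)
    return o

def collect(roots):
    # Mark: trace everything reachable from the roots.
    stack = list(roots)
    while stack:
        o = stack.pop()
        if not o.marked:
            o.marked = True
            stack.extend(o.refs)
    # Sweep: discard unmarked objects. Note this pass visits the
    # whole heap, not just the live objects.
    heap[:] = [o for o in heap if o.marked]
    for o in heap:
        o.marked = False     # reset marks for the next cycle

a, b, c = allocate(), allocate(), allocate()
a.refs.append(b)             # a -> b is live; c is garbage
collect(roots=[a])
print(len(heap))             # 2: a and b survive, c is swept
```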
Paul Wilson's paper, "Uniprocessor Garbage Collection Techniques", is a very handy survey of garbage collection algorithms. It's a few years old, but most of what he covers is still relevant today. He also includes information on performance and so on. Just remember that CPU instructions aren't as expensive as they were 20 years ago. ;)
http://www.cse.nd.edu/~dthain/courses/cse40243/spring2006/gc-survey.pdf
Does anyone know of a good reference for canonical CS problems?
I'm thinking of things like "the sorting problem", "the bin packing problem", "the traveling salesman problem", and whatnot.
edit: websites preferred
You can probably find the best in an algorithms textbook like Introduction to Algorithms. Though I've never read that particular book, it's quite renowned for being thorough and would probably contain most of the problems you're likely to encounter.
"Computers and Intractability: A guide to the theory of NP-Completeness" by Garey and Johnson is a great reference for this sort of thing, although the "solved" problems (in P) are obviously not given much attention in the book.
I'm not aware of any good on-line resources, but Karp's seminal paper Reducibility among Combinatorial Problems (1972) on reductions and complexity is probably the "canonical" reference for Hard Problems.
Have you looked at Wikipedia's Category:Computational problems and Category:NP Complete Problems pages? They're probably not complete, but they look like good starting points. Wikipedia seems to do pretty well in CS topics.
I don't think you'll find the answers to all those problems in only one book. I've never seen any decent, comprehensive website on algorithms, so I'd recommend you stick to books. That said, you can always get some introductory material from the canonical algorithm texts; there are three I usually recommend: CLRS, Manber, and Aho, Hopcroft and Ullman (the last is a bit out of date in some key topics, but it's so formal and well-written that it's a must-read). All of them contain important combinatorial problems that are, in some sense, canonical problems in computer science.
After learning some fundamentals of graph theory, you'll be able to move on to network flows and linear programming. These comprise a set of techniques that will ultimately solve most problems you'll encounter (linear programming with the variables restricted to integer values is NP-hard). Network flows deals with problems defined on graphs (with weighted/capacitated edges) and has very interesting applications in fields that seemingly have no relationship to graph theory whatsoever. THE textbook on this is Ahuja, Magnanti and Orlin's.
Linear programming is a kind of superset of network flows, and deals with optimizing a linear function of variables subject to a system of linear constraints. A book that emphasizes the relationship to network flows is Bazaraa's. Then you can move on to integer programming, a very valuable tool that offers many natural techniques for modelling problems like bin packing, task scheduling, the knapsack problem, and so on. A good reference would be L. Wolsey's book.
You definitely want to look at NIST's Dictionary of Algorithms and Data Structures. It's got the traveling salesman problem, the Byzantine generals problem, the dining philosophers' problem, the knapsack problem (= your "bin packing problem", I think), the cutting stock problem, the eight queens problem, the knight's tour problem, the busy beaver problem, the halting problem, etc. etc.
It doesn't have the firing squad synchronization problem (I'm surprised about that omission) or the Jeep problem (more logistics than computer science).
Interestingly enough there's a blog on codinghorror.com which talks about some of these in puzzle form. (I can't remember whether I've read Smullyan's book cited in the blog, but he is a good compiler of puzzles & philosophical musings. Martin Gardner and Douglas Hofstadter and H.E. Dudeney are others.)
Also maybe check out the Stony Brook Algorithm Repository.
(Or look up "combinatorial problems" on google, or search for "problem" in Wolfram Mathworld or look at Hilbert's problems, but in all these links many of them are more pure-mathematics than computer science.)
@rcreswick those sound like good references, but they fall a bit shy of what I'm thinking of. (However, for all I know, it's the best there is.)
I'm going to not mark anything as accepted in hopes people might find a better reference.
Meanwhile, I'm going to list a few problems here; feel free to add more.
The sorting problem: find an ordering of a set that is monotonic in a given way.
The bin packing problem: partition a set into a minimum number of subsets, where each subset is "smaller" than some limit.
The traveling salesman problem: find a Hamiltonian cycle of minimum total weight in a weighted graph.
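As a small illustration of the bin packing entry above, here's the classic first-fit heuristic in Python (a sketch, with invented names). Computing the true minimum number of bins is NP-hard; first-fit is a simple approximation known to use at most roughly 1.7x the optimal number of bins.

```python
def first_fit(items, capacity):
    """Place each item into the first bin with enough room,
    opening a new bin when none fits."""
    bins = []                      # each bin is a list of item sizes
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                      # no existing bin had room
            bins.append([item])
    return bins

print(first_fit([5, 7, 5, 2, 4, 2, 5, 1], capacity=10))
# [[5, 5], [7, 2, 1], [4, 2], [5]]
```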