Category Theory fundamentals - haskell

I am looking for references on Category Theory that
are mature (== at least 5 years old)
are at the level of a university education (not post-doctoral, ultra-symbolic introductions)
start from the basics (Abelian groups and set theory known, or a similar level) and avoid introducing new terms before defining them (counterexample: Wikipedia, where if you look up any definition you quickly find you have to look up an exponentially increasing number of further terms)
preferably support a full conceptual understanding that is useful both for Haskell and for the corresponding mathematics
The problem I am trying to solve is maximizing my use of the paradigms and features of Haskell (instead of blindly accepting that something is, e.g., an Applicative, so what). I am using (or will eventually use) Haskell in automated reasoning.
I put all of these criteria there explicitly so that we can avoid the question being flagged as:
Primarily opinion-based (these are very explicit criteria)
Product recommendation (since I am asking for mature references, answers will not become obsolete quickly)

My opinion:
Harold Simmons, "An Introduction to Category Theory", Cambridge University Press, 2011, is a good start to category theory.
This introductory book is only about 200 pages but does what you ask for. It is targeted at undergraduates, starts from the basics, and explains most of the terminology of the clean mathematics of category theory; 200 pages is basically enough to form a view of the field (and to come back to afterwards). Don't just take my word for it (I am writing a Haskell-and-category-theory book of my own): Chris Allen, one of the authors of "Haskell Programming from First Principles", has pointed out in his talks that this is the category theory material through which he cracked the theory.
Bartosz Milewski, "Category Theory for Programmers", and his open lectures. He is amazing, but I think briefly reading through the 200-page book first would make his material easier to learn, understand, evaluate, and remember.
Then, for what you are interested in:
David I. Spivak, "Category Theory for the Sciences", The MIT Press, 2014. It talks less about the theory but gives better examples of the applications. Spivak's name already speaks for itself; he is known as the "applied category theory" guy.
I think the exact order or interleaving of these is not that important; it depends on how a person's mind works and which path they prefer to take.

Related

machine representation of natural text

I'm currently working on high-level machine representation of natural text.
For example,
"I had one dog but I gave it to Danny who didn't have any"
would be
I.have.dog =1
I.have.dog -=1
Danny.have.dog = 0
Danny.have.dog +=1
something like this....
I'm trying to find resources, but can't really find matching topics..
Is there a valid subject name for this type of research? Any library of resources?
Natural logic sounds like something related but it's not really the same thing I'm working on. Please help me out!
Representing natural language's meaning is the domain of computational semantics. Within that area, lots of frameworks have been developed, though the basic one is still first-order logic.
Specifically, your problem seems to be that of recognizing discourse semantics, which deals with information change brought about by language use. This is pretty much an open area of research, so expect to find a lot of research papers and PhD positions, but little readily-usable software.
As larsmans already said, this is pretty much a really open field of research, called computational semantics (a subfield of computational linguistics.)
There's one important thing that you'll need to understand before starting off in the comp-sem world: most people there use fancy high-level languages. By high-level I don't mean C, but more something like LISP, Prolog, or, as of late, Haskell. Computational semantics is very close to logic, which is why people researching the topic are more comfortable with functional and logical languages — they're closer to what they actually use all day long.
It will also be very useful for you to first look at some foundational course in predicate logic, since that's what the underlying literature usually takes for granted.
A good introduction to the connection between logic and language is L.T.F. Gamut — Logic, Language, and Meaning, volume I. This deals with the linguistic side of semantics, which won't help you implement anything, but it will help you understand the following literature. That said, there are at least some books that will explain predicate logic as they go, but if you ask me, any person really interested in the representation of language as a formal system should take a course in predicate logic and possibly in intuitionistic and intensional logic.
To give you a bit of a peek, your example is rather difficult to treat for
current comp-sem approaches. Not impossible, but already pretty high up the
scale of difficulty. What makes it difficult is the tense for one part (dealing
with tense and aspect will typically bring you into event semantics), but also
that you'd have to define the give and have relations in a way that
works for this example. (An easier example to work with would be, say "I had
a dog, but I gave it to Danny who didn't have any." Can you see why?)
Let's translate "I have a dog."
∃x[dog(x) ∧ have(I,x)]
(There is an object x, such that x is a dog and the have-relation holds between
"I" and x.)
These sentences would then be evaluated against a model, where the "I"
constant might already be defined. By evaluating multiple sentences in sequence,
you could then alter that model so that it keeps track of a conversation.
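To make the model idea concrete, here is a minimal Haskell sketch that evaluates the formula above against a small fixed model (all the names here, such as Entity, Formula, holds and the toy model itself, are made up for illustration and not taken from any library):

import Data.Maybe (fromMaybe)

-- Entities in the domain of discourse.
data Entity = I | Danny | Rex deriving (Eq, Show, Enum, Bounded)

-- A tiny fragment of first-order logic.
data Formula
  = Pred String [Entity]        -- e.g. Pred "dog" [Rex]
  | And Formula Formula
  | Exists (Entity -> Formula)  -- existential quantification over the domain

-- A model lists which predicates hold of which tuples of entities.
type Model = [(String, [[Entity]])]

model :: Model
model = [ ("dog",  [[Rex]])
        , ("have", [[I, Rex]]) ]

-- Evaluate a formula against a model.
holds :: Model -> Formula -> Bool
holds m (Pred p args) = args `elem` fromMaybe [] (lookup p m)
holds m (And f g)     = holds m f && holds m g
holds m (Exists body) = any (holds m . body) [minBound .. maxBound]

-- "I have a dog":  ∃x[dog(x) ∧ have(I,x)]
iHaveADog :: Formula
iHaveADog = Exists (\x -> Pred "dog" [x] `And` Pred "have" [I, x])

main :: IO ()
main = print (holds model iHaveADog)  -- True in this model

Keeping track of a conversation would then amount to updating the model (adding and removing tuples) as each new sentence comes in, rather than only querying it.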
Let's give you some suggestions to start you off.
The classic comp-sem system is
SHRDLU, which places geometric
figures of various colors in a virtual environment. You can play around with it, since there's a Windows-compatible demo online at the page I linked you to.
The best modern book on the topic is probably Blackburn and Bos
(2005). It's written in Prolog, but
there are sources linked on the page to learn Prolog
(now!)
Van Eijck and Unger give a good course on computational semantics in Haskell, which is a bit more recent, but in my eyes not quite as educational in terms of raw computational semantics as Blackburn and Bos.

Recursion schemes for dummies?

I'm looking for some really simple, easy-to-grasp explanations of recursion schemes and corecursion schemes (catamorphisms, anamorphisms, hylomorphisms etc.) which do not require following lots of links, or opening a category theory textbook. I'm sure I've reinvented many of these schemes unconsciously and "applied" them in my head during the process of coding (I'm sure many of us have), but I have no clue what the (co)recursion schemes I use are called. (OK, I lied. I've just been reading about a few of them, which prompted this question. But before today, I had no clue.)
I think diffusion of these concepts within the programming community has been hindered by the forbidding explanations and examples one tends to come across - for example on Wikipedia, but also elsewhere.
It's also probably been hindered by their names. I think there are some alternative, less mathematical names (something about bananas and barbed wire?) but I have no clue what the cuter names are for the recursion schemes I use, either.
I think it would help to use examples with datatypes representing simple real-world problems, rather than abstract data types such as binary trees.
Extremely loosely speaking, a catamorphism is just a slight generalization of fold, and an anamorphism is a slight generalization of unfold. (And a hylomorphism is just an unfold followed by a fold.) They're usually presented in a more rigorous form, to make the connection to category theory clearer. The denser form lets us distinguish data (the necessarily finite product of an initial algebra) and codata (the possibly infinite product of a final coalgebra). This distinction lets us guarantee that a fold is never called on an infinite list. The other reason for the funny way that catamorphisms and anamorphisms are generally written is that by operating over F-algebras and F-coalgebras (generated from functors) we can write them once and for all, rather than once over a list, once over a binary tree, etc. This in turn helps make clear exactly why they're all the same thing.
But from a pure intuition standpoint, you can think of cata and ana as reducing and producing, and that's about it.
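To make the fold/unfold correspondence concrete, here is a minimal Haskell sketch using the usual textbook definitions (Fix, cata, ana and hylo are conventional names here, not tied to any particular recursion-schemes library):

{-# LANGUAGE DeriveFunctor #-}

-- The fixed point of a functor: a generic recursive data type.
newtype Fix f = Fix { unFix :: f (Fix f) }

-- Catamorphism: a generalized fold over any Fix f.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

-- Anamorphism: a generalized unfold into any Fix f.
ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

-- Hylomorphism: an unfold followed by a fold, fused so that the
-- intermediate structure is never actually built.
hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
hylo alg coalg = alg . fmap (hylo alg coalg) . coalg

-- The "pattern functor" for lists of Ints.
data ListF a r = NilF | ConsF a r deriving Functor

-- Summing is an algebra (the fold step); counting down is a coalgebra (the unfold step).
sumAlg :: ListF Int Int -> Int
sumAlg NilF        = 0
sumAlg (ConsF x s) = x + s

countDown :: Int -> ListF Int Int
countDown 0 = NilF
countDown n = ConsF n (n - 1)

-- The sum of n, n-1, ..., 1 as a hylomorphism: sumTo 10 == 55.
sumTo :: Int -> Int
sumTo = hylo sumAlg countDown

Written this way, the same cata and ana work for lists, binary trees, or any other functor, which is the "write them once and for all" point above.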
Edit: a bit more
A metamorphism (Gibbons) is like an inside-out hylo -- it's a fold followed by an unfold. So you can use it to tear down a stream and build up a new one with a potentially different structure.
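As a small sketch of the idea (again plain Haskell with made-up names): fold a list of decimal digits down to a number, then unfold that number into its binary digits, i.e. tear one representation down and build a different one up.

import Data.List (unfoldr)

-- Fold: collapse a list of decimal digits into a single number.
toNumber :: [Int] -> Int
toNumber = foldl (\acc d -> acc * 10 + d) 0

-- Unfold: grow the list of binary digits back out of that number.
toBinary :: Int -> [Int]
toBinary 0 = [0]
toBinary n = reverse (unfoldr step n)
  where step 0 = Nothing
        step k = Just (k `mod` 2, k `div` 2)

-- The metamorphism: a fold followed by an unfold, changing the representation.
decimalToBinary :: [Int] -> [Int]
decimalToBinary = toBinary . toNumber  -- e.g. [1,0] becomes [1,0,1,0]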
Ekmett posted a nice "field guide" to the various schemes in the literature: http://comonad.com/reader/2009/recursion-schemes/
However, while the "intuitive" explanations are straightforward, the linked code is less so, and the blog posts on some of these might be a tad on the complex/forbidding side.
That said, except perhaps for histomorphisms I don't think the rest of the zoo is necessarily something you'd want to think with directly most of the time. If you "get" hylo and meta, you can express nearly anything in terms of them alone. Typically the other morphisms are more restrictive, not less (but therefore give you more properties "for free").
A few references, from the most category-theoretic (but relevant to give a "territory map" that will let you avoid "clicking lots of links") to the simpler & more self-contained:
As far as the "bananas & barbed wire" vocabulary goes, this comes from the original paper of Meijer, Fokkinga & Paterson (and its sequel by other authors), and it is, in sum, just as notation-heavy as the less cute alternatives: the "names" (bananas, etc.) are just a shortcut for the graphical appearance of the ASCII notation of the constructions they are pegged to. For example, catamorphisms (i.e. folds) are represented with (| _ |), and the bar-with-parenthesis looks like a "banana", hence the name. This is the paper that is most often called "impenetrable", hence not the first thing I'd look up if I were you.
The basic reference for those recursion schemes (or more precisely, for a relational approach to those recursion schemes) is Bird & de Moor's Algebra of Programming (the book is unavailable except as print-on-demand, but there are copies available second-hand and it should be in libraries). It contains a more gradual and detailed explanation of point-free programming, if still an "academic" one: the book introduces some category-theoretic vocabulary, though in a self-contained manner. The exercises (which you wouldn't find in a paper) also help.
Sorting Morphisms by Lex Augusteijn uses sorting algorithms on various data structures to explain recursion schemes. It is pretty much "recursion schemes for dummies" by construction:
This presentation gives the opportunity to introduce the various morphisms in
a simple way, namely as patterns of recursion that are useful in functional programming, instead of the usual approach via category theory, which tends to be needlessly intimidating for the average programmer.
Another approach to a symbol-free presentation is Jeremy Gibbons' chapter Origami Programming in The Fun of Programming, with some overlap with the previous one. Its bibliography gives a tour of the introductions to the topic.
Edit: Jeremy Gibbons just let me know he has added a link to the bibliography of the whole book on the book's webpage after reading this question. Enjoy!
I'm afraid these last two references only give a solid explanation of (cata|ana|hylo|para)morphisms, but my hope is that this would be enough to tear through the algebraic formalism you can find in more notation-heavy publications. I don't know of any strictly non-category-theoretic explanation of (co-)recursion schemes other than those four.
Tim Williams gave a brilliant talk at the London Haskell User Group last night about recursion schemes with a motivating example of each of the ones you mention. Check out the slides:
http://www.timphilipwilliams.com/slides.html
There are references to all the usual suspects (lenses, bananas, barbed wire, à la carte, etc.) at the end of the slides, and you could also google "Origami Programming", which is a nice intro that I hadn't come across before.
and the video will be here when it's uploaded:
http://www.youtube.com/user/LondonHaskell
Edit: Most of the links in question are in huitseeker's answer above.

What are good starting points for someone interested in natural language processing? [closed]

Question
So I've recently come up with some new possible projects that would have to deal with deriving 'meaning' from text submitted and generated by users.
Natural language processing is the field that deals with these kinds of issues, and after some initial research I found the OpenNLP Hub and university collaborations like the attempto project. And stackoverflow has this.
If anyone could link me to some good resources, from research papers and introductory texts to APIs, I'd be happier than a 6-year-old kid opening his Christmas presents!
Update
Through one of your recommendations I've found OpenCyc ('the world's largest and most complete general knowledge base and commonsense reasoning engine'). Even more amazing, there's a project called UMBEL that is a distilled version of OpenCyc. It features semantic data in RDF/OWL/SKOS N3 syntax.
I've also stumbled upon ANTLR, a parser generator for 'constructing recognizers, interpreters, compilers, and translators from grammatical descriptions'.
And there's a question on here by me, that lists tons of free and open data.
Thanks stackoverflow community!
Tough call; NLP is a much wider field than most people think it is. Basically, language can be split up into several categories, which will require you to learn totally different things.
Before I start, let me tell you that I doubt you'll have any notable success (as a professional, at least) without having a degree in some (closely related) field. There is a lot of theory involved, most of it is dry stuff and hard to learn. You'll need a lot of endurance and most of all: time.
If you're interested in the meaning of text, well, that's the Next Big Thing. Semantic search engines are predicted to usher in Web 3.0, but we're far from 'there' yet. Extracting logic from a text depends on several steps:
Tokenization, Chunking
Disambiguation on a lexical level (Time flies like an arrow, but fruit flies like a banana.)
Syntactic Parsing
Morphological analysis (tense, aspect, case, number, whatnot)
A small list, off the top of my head. There's more :-), and many more details to each point. For example, when I say "parsing", what is this? There are many different parsing algorithms, and there are just as many parsing formalisms. Among the most powerful are Tree-adjoining grammar and Head-driven phrase structure grammar. But both of them are hardly used in the field (for now). Usually, you'll be dealing with some half-baked generative approach, and will have to conduct morphological analysis yourself.
Going from there to semantics is a big step. A syntax/semantics interface depends both on the syntactic and on the semantic framework employed, and there is no single working solution yet. On the semantic side, there's classic generative semantics, then there is Discourse Representation Theory, dynamic semantics, and many more. Even the logical formalism everything is based on is still not well-defined. Some say one should use first-order logic, but that hardly seems sufficient; then there is intensional logic, as used by Montague, but that seems overly complex and computationally infeasible. There is also dynamic logic (Groenendijk and Stokhof have pioneered this stuff. Great stuff!) and very recently, this summer actually, Jeroen Groenendijk presented a new formalism, Inquisitive Semantics, also very interesting.
If you want to get started on a very simple level, read Blackburn and Bos (2005); it's great stuff, and the de facto introduction to computational semantics! I recently extended their system to cover the partition theory of questions (question answering is a beast!), as proposed by Groenendijk and Stokhof (1982), but unfortunately the theory has a complexity of O(n²) over the domain of individuals. While doing so, I found B&B's implementation to be a bit, erhm… hackish in places. Still, it is going to really, really help you dive into computational semantics, and it is still a very impressive showcase of what can be done. Also, they deserve extra cool points for implementing a grammar that is set in Pulp Fiction (the movie).
And while I'm at it, pick up Prolog. A lot of research in computational semantics is based on Prolog. Learn Prolog Now! is a good intro. I can also recommend "The Art of Prolog" and Covington's "Prolog Programming in Depth" and "Natural Language Processing for Prolog Programmers", the former of which is available for free online.
Chomsky is totally the wrong source to look to for NLP (and he'd say as much himself, emphatically)--see: "Statistical Methods and Linguistics" by Abney.
Jurafsky and Martin, mentioned above, is a standard reference, but I myself prefer Manning and Schütze. If you're serious about NLP you'll probably want to read both. There are videos of one of Manning's courses available online.
If you get through Prolog until the DCG chapter in Learn Prolog Now! mentioned by Mr. Dimitrov above, you'll have a good beginning at getting some semantics into your system, since Prolog gives you a very simple way of maintaining a database of knowledge and belief, which can be updated through question-answering.
As regards the literature, I have one major recommendation for you: run out and buy Speech and Language Processing by Jurafsky & Martin. It is pretty much the book on NLP (the first chapter is available online); it is used in a frillion university courses but is also very readable for the non-linguist and practically oriented, while at the same time going fairly deep into the linguistics problems. I really cannot recommend it enough. Chapters 17, 18 and 21 seem to be what you're looking for (14, 15 and 18 in the first edition); they show you simple lambda notation which translates pretty well to Prolog DCGs with features.
Oh, btw, on getting the masters in linguistics; if NL semantics is what you're into, I'd rather recommend taking all the AI-related courses you can find (although any courses on "plain" linguistic semantics, logic, logical semantics, DRT, LFG/HPSG/CCG, NL parsing, formal linguistic theory, etc. wouldn't hurt...)
Reading Chomsky's original literature is not really useful; as far as I know there are no current implementations that directly correspond to his theories, all the useful stuff of his is pretty much subsumed by other theories (and anyone who stays near linguists for any matter of time will absorb knowledge of Chomsky by osmosis).
I'd highly recommend playing around with the NLTK and reading the NLTK Book. The NLTK is very powerful and easy to get into.
You could try reading up a bit on phrase structure grammars, which are basically the mathematics behind much language processing. It's actually not that heavy, being largely based on set and graph theory. I studied it many moons ago as part of a discrete math course, and I guess there are many good references available at this stage.
Edit: Not as much as I expected on Google, although this one looks like a good learning source.
One of the early explorers into NLP is Noam Chomsky; he wrote small books on the subject in the 50s through the 70s. You may find that engaging reading.
Cycorp have a short description of how their Cyc knowledge base derives meaning from sentences.
By utilising a massive knowledge base of common facts, the system can determine the most logical parse of a sentence.
A simpler place to begin with the building blocks is to look at the documentation for a package that attempts to do it. I'd recommend the Python Natural Language Toolkit (NLTK), particularly because of their well-written, free book, which is filled with examples. It won't get you all the way to what you want (which is an AI-hard problem), but it will give you a good footing. NLTK has parsers, chunkers, context-free grammars, and more.
This is really hard stuff. I'd start off by getting at least a Masters in Linguistics, and then work towards my PhD in computer science, concentrating on NLP.
The problem is that most of us don't have the understanding of what language is. And without that understanding, it's bloody tough to implement a solution.
Other comments give some readings, which are probably fine if you want to get started playing around with a small subset of the problem, but in order to come up with a really robust solution, then there are no shortcuts. You need the academic background in both disciplines.
A very enjoyable readable introduction is The Language Instinct by Steven Pinker. It goes into the Chomsky stuff and also tells interesting stories from the evolutionary biology angle. Might be worth starting with something like that before diving into Chomsky's papers and related work, if you're new to the subject.

What exactly is Intentional Programming

On my reading spree, I stumbled upon something called Intentional Programming.
I understood it somewhat, but not fully. If anyone can explain it in better detail, please do. Is it being used in any real application?
You got me started on this one...
Looks like C. Simonyi wanted to step up to the next level of abstraction from high-level languages: reduce the dependency of customers on developers to make every change... in code (cryptic for people not in development).
So he invents this new product called IP, which has a WYSIWYG-type GUI editor to create a domain-specific model. (i.e. IP has a GUI to create the building blocks for your app... LISP allowed you to create the meta/building blocks, but not in a way that domain experts could easily do it.)
Like the models in UML, the promise is that you can auto-generate the corresponding source code at the "push of a button". So the domain experts can tweak the model in the future and press the Bake button to deliver the next version of the app.
It seems to utilise DSLs, with the added benefit that multiple user-created DSLs can talk to each other via a built-in IP mechanism... which means the finance model and the sales model can interact and reuse blocks as needed. As with DSLs, you get the benefit of code that conveys developer intent rather than appeasing implementation-language constraints.
The idea being to give greater control to the BA and domain experts who actually know what's needed...
Update:
Real-world use looks like 'not yet'... although Simonyi believes in it 'absolutely in the long term'.
Short story: MS squished IP in favor of the .NET Framework; Simonyi left MS and formed his own company, 'Intentional Software', with the agreement that he could use the IP ideas but would have to rewrite his working prototype from the ground up (that should slow him down). It's still work in progress, I think, and being written in C# (to boot).
Sources:
"Anything You Can Do, I Can Do Meta" by Scott Rosenberg, MIT Technology Review (2007)
To think that until yesterday I didn't know a thing about this. Investigative reporter signing off. Going back to the day job :)
It's the opposite of what happens when I come home at 2am after a pub crawl and fire up the laptop "just to check my email real quick, hon."
Then, the next day, when I peel open one eye and find my way to the bathroom at the crack of noon, I start brushing my teeth and realize, toothpaste dribbling out of my mouth, that last night I made 4 SVN commits, closed 3 bugs, and figured out how to solve the starvation problem on our distributed locking protocol. And I have no idea how the hell any of it works, anymore.
Or maybe it's what workmad3 said.
It appears to be a method of programming that allows the programmer to expand what is actually in the language to more closely follow their original intent, rather than forcing the programmer's intent into the constrained syntax of the language.
It explicitly mentions LISP as a language that supports this, so I'd suggest you read up on this great language :) LISP macros are exactly what is described in the article, allowing you to indefinitely expand the language to cover almost anything you would care to express. (A fairly common outcome of large LISP systems is that you end up with a domain-specific language that is very good for writing specific applications, i.e. writing a word processor ends up producing a word-processor-specific language.)
For your last part: yes, LISP (and thus Intentional Programming) is used in some projects. Paul Graham is a great proponent of LISP, and other examples include the original Crash Bandicoot (a game-object creation system was written in LISP for it, including a LISP PlayStation compiler).
I have a slightly different understanding of Intentional Programming (as a more general term, not just what Charles Simonyi is doing). It is closely linked to fluent interfaces and can be achieved, with various degrees of difficulty, in modern Object Orientated languages.
Some of these concepts come from Domain-Driven Design (in fact the term "fluent interface" has been popularised by Eric Evans, the author of "the" blue book, Domain-Driven Design: Tackling Complexity in the Heart of Software).
The aim is to make business layer code readable by a non-programmer (i.e. a business person). This can be achieved by class and method names that explicitly state the intent of the operation. In my opinion, being explicit and being intentional produces highly readable and maintainable code.
Consider the two examples below that achieve the same thing - creating an order for a customer with 10% discount and adding a couple of products to it.
//C#, Normal version
Customer customer = CustomerService.Get(23);
Order order = new Order();
//What is 0.1? Need to look at Discount property to understand
order.Discount = 0.1;
order.Customer = customer;
//What's 34?
Product product = ProductService.Get(34);
//Do we really care that Order stores OrderLines?
order.OrderLines.Add(new OrderLine(product, 1));
Product product2 = ProductService.Get(54);
order.OrderLines.Add(new OrderLine(product2, 2)); //What's 2?
order.Submit();
//C#, Fluent version
//byId is named parameter, states that this method looks up customer by Id
ICustomerForOrderCreation customer =
    CustomerService.GetCustomerForOrderCreation(byId: 23);
//Explicit method to create a discount order and explicit percentage
Order order = customer.CreateDiscountOrder(10.Percent())
    .WithProduct(ProductService.Get(byId: 34))
    .WithProduct(ProductService.Get(byId: 54))
    .WithQuantity(2); //Explicit quantity
order.Submit();
By changing your programming style slightly, you are able to communicate your intent more clearly and reduce the amount of having to look at code elsewhere to understand what's going on.
Seems to me like yet another fad of software engineering. We've seen thousands of them already: meta programming, generative programming, visual programming, and so on. For a short time they get very fashionable, people use it everywhere, and then they invariably go back to old ways of creating software.
Why? Frederick Brooks already answered this question over 20 years ago: there's No Silver Bullet to kill the werewolf...
Intentional Programming is encoding your intent, or goals. Thus it is goal-oriented programming, or planning. Step up to management.
It's where you intend to program, you don't just accidentally do it. ;)

canonical problems list

Does anyone know of a good reference for canonical CS problems?
I'm thinking of things like "the sorting problem", "the bin packing problem", "the traveling salesman problem" and whatnot.
edit: websites preferred
You can probably find the best in an algorithms textbook like Introduction to Algorithms. Though I've never read that particular book, it's quite renowned for being thorough and would probably contain most of the problems you're likely to encounter.
"Computers and Intractability: A guide to the theory of NP-Completeness" by Garey and Johnson is a great reference for this sort of thing, although the "solved" problems (in P) are obviously not given much attention in the book.
I'm not aware of any good on-line resources, but Karp's seminal paper Reducibility among Combinatorial Problems (1972) on reductions and complexity is probably the "canonical" reference for Hard Problems.
Have you looked at Wikipedia's Category:Computational problems and Category:NP Complete Problems pages? It's probably not complete, but they look like good starting points. Wikipedia seems to do pretty well in CS topics.
I don't think you'll find the answers to all those problems in only one book. I've never seen any decent, comprehensive website on algorithms, so I'd recommend you stick to the books. That said, you can always get some introductory material from the canonical algorithm texts (there are three I usually recommend: CLRS; Manber; and Aho, Hopcroft and Ullman, which is a bit out of date in some key topics, but so formal and well-written that it's a must-read). All of them contain important combinatorial problems that are, in some sense, canonical problems in computer science.
After learning some fundamentals in graph theory you'll be able to move to network flows and linear programming. These comprise a set of techniques that will ultimately solve most problems you'll encounter (linear programming with the variables restricted to integer values is NP-hard). Network flows deals with problems defined on graphs (with weighted/capacitated edges) and has very interesting applications in fields that seemingly have no relationship to graph theory whatsoever. THE textbook on this is Ahuja, Magnanti and Orlin's.
Linear programming is some kind of superset of network flows, and deals with optimizing a linear function of variables subject to restrictions in the form of a linear system of equations. A book that emphasizes the relationship to network flows is Bazaraa's. Then you can move on to integer programming, a very valuable tool that presents many natural techniques for modelling problems like bin packing, task scheduling, the knapsack problem, and so on. A good reference would be L. Wolsey's book.
You definitely want to look at NIST's Dictionary of Algorithms and Data Structures. It's got the traveling salesman problem, the Byzantine generals problem, the dining philosophers' problem, the knapsack problem (= your "bin packing problem", I think), the cutting stock problem, the eight queens problem, the knight's tour problem, the busy beaver problem, the halting problem, etc. etc.
It doesn't have the firing squad synchronization problem (I'm surprised about that omission) or the Jeep problem (more logistics than computer science).
Interestingly enough there's a blog on codinghorror.com which talks about some of these in puzzle form. (I can't remember whether I've read Smullyan's book cited in the blog, but he is a good compiler of puzzles & philosophical musings. Martin Gardner and Douglas Hofstadter and H.E. Dudeney are others.)
Also maybe check out the Stony Brook Algorithm Repository.
(Or look up "combinatorial problems" on google, or search for "problem" in Wolfram Mathworld or look at Hilbert's problems, but in all these links many of them are more pure-mathematics than computer science.)
@rcreswick those sound like good references, but they fall a bit shy of what I'm thinking of. (However, for all I know, they're the best there is.)
I'm going to not mark anything as accepted in hopes people might find a better reference.
Meanwhile, I'm going to list a few problems here; feel free to add more.
The sorting problem: find an ordering of a set that is monotonic in a given way.
The bin packing problem: partition a set into a minimum number of subsets where each subset is "smaller" than some limit.
The traveling salesman problem: find a Hamiltonian cycle in a weighted graph with the minimum total weight.

Resources