Is there a finite number of questions that can be asked about a specific language (and/or topic)? For example, given that T-SQL has only so many commands, can there be only a limited number of non-repetitive questions? And if so, can you use that to determine sizing for a site like Stack Overflow, and to determine the probability of a new question being a repeat of a prior one? If there is a finite number, how would you determine or calculate it? For instance, T-SQL has x commands, and each one has a set of relevant questions (syntax, example of use, etc.), so could the number of questions = x times the potential questions per command times some relevant variation, or something like that?
No, since programs can be arbitrarily long, and this site is not just about language commands but about programs developed with those languages.
I'm pretty sure Turing says no, and if you don't believe him then Gödel might have something to say about it.
A Stack Overflow question is expressed as a finite-length sequence of bytes. One could in principle read the question body as an integer, expressed lowest digit first, in base 256 (or larger, if you wish to think about it as Unicode). This is a bijection between questions and whole numbers. Therefore the set of all Stack Overflow questions has a countably infinite cardinality (how do I typeset \aleph_0 on SO?).
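To make the encoding concrete, here is a minimal sketch (written in Prolog, assuming SWI-Prolog's foldl/4; bytes_number/2 is just an illustrative name) that reads a list of bytes as base-256 digits, least significant first:

% Read a list of bytes as base-256 digits, least significant first.
bytes_number(Bytes, N) :-
    foldl(add_digit, Bytes, 0-1, N-_).

% add_digit(+Byte, +Acc0-Place0, -Acc-Place): add one digit at the
% current place value, then shift the place value up by a factor of 256.
add_digit(Byte, Acc0-Place0, Acc-Place) :-
    Acc is Acc0 + Byte * Place0,
    Place is Place0 * 256.

For example, bytes_number([72, 105], N) yields N = 72 + 105*256 = 26952.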
CLP(FD) lets the user set a domain for every would-be integer variable, so it is able to solve equations.
So far so good.
However, you can't do the same in CLP(R) or similar languages (where only simple inferences are possible). And it's not hard to understand why: the fractional part of a number can take an enormous range of values, bounded only by an implementation limit. This means the search space would be far too large for a solver that treats floats the way CLP(FD) treats integers to be of any practical use. So in CLP(R) it is the user's task to write a generator and to set constraint guards where needed to get variables instantiated with numbers (when simple inference is not enough).
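For concreteness, this is the kind of CLP(FD) behaviour I mean (a minimal sketch, assuming SWI-Prolog's library(clpfd)):

:- use_module(library(clpfd)).

% With a finite domain, label/1 can enumerate concrete integer solutions.
solve(X, Y) :-
    [X, Y] ins 0..10,   % restrict both variables to a finite domain
    X + Y #= 7,         % arithmetic constraint over the integers
    X #< Y,
    label([X, Y]).      % search instantiates X and Y

Querying solve(X, Y) yields concrete pairs such as X = 0, Y = 7 on backtracking; with floats there is no analogous finite domain to label over.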
So my question is: is there any CLP(FD)-like language over the reals? I think it could be implemented by means of rounding, search and successive incremental approximation.
There are at least some major CLP(FD) solvers that support real (decision) variables:
Gecode
JaCoP
ECLiPSe CLP (ic library)
Choco (using Ibex)
(The first three also support var float in MiniZinc.)
The answer to your question is yes. There are constraint-based solvers dedicated to floating-point numbers. I do not have a list of solvers, but I know that Ibex (http://www.ibex-lib.org) is a library allowing the use of floats. You should also have a look at SMT solvers implementing the theory of reals (http://smtlib.cs.uiowa.edu/solvers.shtml).
I am aware that languages like Prolog allow you to write things like the following:
mortal(X) :- man(X). % All men are mortal
man(socrates). % Socrates is a man
?- mortal(socrates). % Is Socrates mortal?
yes
What I want is something like this, but backwards. Suppose I have this:
mortal(X) :- man(X).
man(socrates).
man(plato).
man(aristotle).
I then ask it to give me a random X for which mortal(X) is true (thus it should give me one of 'socrates', 'plato', or 'aristotle' according to some random seed).
My questions are:
Does this sort of reverse inference have a name?
Are there any languages or libraries that support it?
EDIT
As somebody below pointed out, you can simply ask mortal(X) and it will return all X, from which you can pick a random one from the list. What if, however, that list were very large, perhaps with billions of entries? Obviously in that case it wouldn't do to generate every possible result before picking one.
To see how this would be a practical problem, imagine a simple grammar that generated a random sentence of the form "adjective1 noun1 adverb transitive_verb adjective2 noun2". If the lists of adjectives, nouns, verbs, etc. are very large, you can see how the combinatorial explosion is a problem. If each list had 1000 words, you'd have 1000^6 possible sentences.
Instead of Prolog's depth-first search, a randomized depth-first search strategy could easily be implemented. All that is required is to randomize the program flow at choice points, so that every time a disjunction is reached a random branch of the search tree (= Prolog program) is selected instead of the first one.
Note, though, that this approach does not guarantee that all solutions will be equally probable. To guarantee that, you would need to know in advance how many solutions each branch generates, in order to weight the randomization accordingly.
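For instance, applied to the sentence-grammar example from the question, a minimal sketch could look like this (SWI-Prolog assumed; the predicate and word names are illustrative, and random_member/2 comes from library(random)):

:- use_module(library(random)).

% Build a random sentence by making a random choice at each choice
% point instead of enumerating all 1000^6 combinations.
random_sentence([Adj1, N1, Adv, V, Adj2, N2]) :-
    random_word(adjective, Adj1),
    random_word(noun, N1),
    random_word(adverb, Adv),
    random_word(transitive_verb, V),
    random_word(adjective, Adj2),
    random_word(noun, N2).

% Collect only the words of one class and pick one at random.
random_word(Class, Word) :-
    findall(W, word(Class, W), Words),
    random_member(Word, Words).

% A tiny illustrative lexicon; a real one would list many more words.
word(adjective, quick).
word(adjective, lazy).
word(noun, fox).
word(noun, dog).
word(adverb, quietly).
word(transitive_verb, chases).

Each call materialises only one word list at a time, so the combinatorial product of the lists is never built.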
I've never used Prolog or anything similar, but judging by what Wikipedia says on the subject, asking
?- mortal(X).
should list everything for which mortal is true. After that, just pick one of the results.
So to answer your questions,
I'd go with "a query with a variable in it"
From what I can tell, Prolog itself should support it quite fine.
I don't think you can calculate the nth solution directly, but you can calculate the first n solutions (with n randomly picked) and take the last one. Of course this would be problematic if n = 10^(big_number)...
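A hedged sketch of that idea (SWI-Prolog assumed; nth_solution/2 is an illustrative helper, not a library predicate), counting solutions with nb_setarg/3 instead of collecting them all in a list first:

% Succeed with the N-th solution of Goal, counting solutions as they
% are produced on backtracking rather than materialising all of them.
nth_solution(Goal, N) :-
    Counter = count(0),
    call(Goal),
    arg(1, Counter, C0),
    C1 is C0 + 1,
    nb_setarg(1, Counter, C1),  % non-backtrackable increment
    C1 =:= N,
    !.

With the facts from the question, random_between(1, 3, N), nth_solution(mortal(X), N) leaves X bound to the N-th mortal found in the normal search order.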
You could also do something like
mortal(ID, X) :- man(ID, X).
man(X) :-
    random_between(1, 3, ID),   % SWI-Prolog: random integer ID in 1..3
    man(ID, X).                 % look up the man with that ID
man(1, socrates).
man(2, plato).
man(3, aristotle).
but the problem is that if not every man were mortal - for example, if only 1 out of 1,000,000 were - you would have to search a lot. It would be like searching for solutions to an equation by trying random numbers until you find one.
You could develop some sort of heuristic to find a solution close to the chosen number, but that may (negatively) affect the randomness.
I suspect that there is no way to do it more efficiently: you either have to calculate the set of solutions and pick one, or keep picking members of a superset of the solutions until you hit an actual solution. But don't take my word for it.
I need to manipulate expressions like 1 + sqrt(3) and do basic arithmetic like addition, subtraction, and division. I'd like the result to be in some sort of canonical form so that it can be used as a key in a map. Turning 1 + sqrt(3) into a float is not feasible due to roundoff problems.
I used SymPy for this task in Python. Is there an equivalent native library for Haskell?
Please check out the numbers package. If all you need is to store exact numbers like "1 + √3", you may want to use Data.Number.CReal instead of symbolic arithmetic. It stores the expressions and can compute them to an arbitrary number of digits when needed.
Prelude Data.Number.CReal> let cx = 1 + sqrt (3 :: CReal)
Prelude Data.Number.CReal> showCReal 400 cx
"2.7320508075688772935274463415058723669428052538103806280558069794519330169088000370811461867572485756756261414154067030299699450949989524788116555120943736485280932319023055820679748201010846749232650153123432669033228866506722546689218379712270471316603678615880190499865373798593894676503475065760507566183481296061009476021871903250831458295239598329977898245082887144638329173472241639845878553977"
There is also a Data.Number.Symbolic module in the package but the description says "It's mainly useful for debugging".
It seems you are looking for a Computer Algebra System (CAS) in Haskell. In spite of the many references to algebraic objects in the names of Haskell packages/modules, I've never heard of a general-purpose, well-maintained CAS in Haskell (like SymPy or Sage in Python).
However in the list of Computer Algebra Systems on Wikipedia I've found a reference to
DoCon. The Algebraic Domain Constructor
It uses a non-standard license, but I dare say it is still open source (though with renaming and attribution requirements). As of July 2010, docon-2.11 still builds with GHC 6.12.1 and runs its demos/tests (I only had to insert a LANGUAGE FlexibleContexts pragma in one file of the demo).
DoCon is well documented (the manual runs to 362 pages). The manual is packed inside the zip with the sources, so I have put it online separately for convenience:
DoCon 2.11 Manual.ps
Please look through to check if it suits your needs.
Check out the cyclotomic package, which implements exact arithmetic on the cyclotomic numbers. These include, among others, the square roots of all rational numbers (hence in particular 1+sqrt(3)), and the key operations (like equality) are decidable.
They do not provide an Ord instance (for the same reason the complex numbers do not), but one can implement a non-semantic instance if all one needs is to use them as keys in a lookup table. You may want to contact the author about how to do this correctly, as there may be some invariants that are not obvious (e.g. one may need to be careful about zeros in the coeffs map).
Whilst doing exam revision, I am having trouble answering the following question from the book "An Introduction to the Theory of Computation" by Sipser. Unfortunately there is no solution to this question in the book.
Explain why the following is not a legitimate Turing machine.
M = {
The input is a polynomial p over variables x1, ..., xn
Try all possible settings of x1, ..., xn to integer values
Evaluate p on all of these settings
If any of these settings evaluates to 0, accept; otherwise reject.
}
This is driving me crazy! I suspect it is because the set of integers is infinite? Does this somehow exceed the alphabet's allowable size?
Although this is quite an informal way of describing a Turing machine, I'd say the problem is one of the following:
otherwise reject - I agree with Welbog on that. Since you have a countably infinite set of possible settings, the machine can never know whether a setting that evaluates to 0 is still to come, so it will loop forever if it doesn't find one; it can only stop when such a setting is actually encountered. That last instruction is therefore useless and will never be executed, unless of course you limit the machine to a finite set of integers.
The order of the steps: I would read this pseudocode as "first write all possible settings down, then evaluate p on each one", and there's your problem:
Again, because the set of possible settings is infinite, not even the first part will ever terminate - there is never a last setting to write down before moving on to the next step. In this case, not only can the machine never say "there is no setting that gives 0", it can never even start evaluating to find one. This, too, would be solved by limiting the integer set.
Anyway, I don't think the problem is the alphabet's size. You wouldn't need an infinite alphabet, since your integers can be written in decimal/binary/etc., and those use only a (very) finite alphabet.
I'm a bit rusty on Turing machines, but I believe your reasoning is correct, i.e. the set of integers is infinite, therefore you cannot try them all. I am not sure how to prove this formally, though.
However, the easiest way to get your head around Turing machines is to remember that "anything a real computer can compute, a Turing machine can also compute". So if you could write a program that, given a polynomial, carries out those steps, you would be able to find a Turing machine that does it too.
I think the problem is with the very last part: otherwise reject.
According to basic facts about countable sets, a finite Cartesian product of countable sets is itself countable. In your case you have vectors of n integers, i.e. Z^n, which is countable. So the set of possible settings is countable, and it is therefore possible to try every combination of them. (That is to say, without missing any combination.)
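To make that concrete, here is a hedged sketch (written in Prolog, SWI-Prolog assumed; integer_vector/2 is just an illustrative name) of enumerating every length-N integer vector without missing any, by dovetailing over a growing bound B:

% Emit every length-N vector of integers exactly once: for each bound
% B = 0, 1, 2, ..., generate the vectors whose components lie in -B..B
% and whose largest absolute component is exactly B.
integer_vector(N, Vec) :-
    length(Vec, N),
    between(0, inf, B),
    NegB is -B,
    maplist(between(NegB, B), Vec),   % each component in -B..B
    maplist(abs_component, Vec, Abs),
    max_list(Abs, B).                 % the vector is new at this bound

abs_component(X, A) :- A is abs(X).

Every integer vector shows up after finitely many steps, which is why "try all possible settings" can at least be organised as a single unending enumeration.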
Also, computing the result of p on a given set of inputs is also possible.
And entering an accepting state when p evaluates to 0 is also possible.
However, since there is an infinite number of input vectors, you can never reject the input. Therefore no Turing machine can follow all of the rules defined in the question. Without that last rule, it is possible.
I had never thought about it until recently, but I'm not sure why we call strings strings. I am a .NET programmer, but I believe the concept of strings exists in virtually every programming language.
Outside of programming, I don't believe I've heard the word string used to describe words or letters. A quick Google of "define: string" yields a bunch of definitions that have nothing to do with the concept of letters, words, or anything of the nature associated with programming.
My guess is that, back in the day, strings were really just arrays of characters of a particular length, often with a delimiting character at the end. But I don't see a natural transition from "character array" to "string".
Can someone offer up some insight to why we call strings strings?
My assumption has always been that the programming term originated from the following definition of the word "string" (from Merriam-Webster):
(1): a series of things arranged in or as if in a line <a string of cars> <a string of names>
(2): a sequence of like items (as bits, characters, or words)
Since a string in programming is simply an ordered sequence of characters, referring to this as a "string of characters" (or simply "string") seems like the most probable origin.
From this reference:
The 1971 OED (p. 3097) quotes an 1891 Century Dictionary on a source in the Milwaukee Sentinel of 11 Jan. 1898 (section 3, p. 1) to the effect that this is a compositor's term. Printers would paste up the text that they had generated in a long strip of characters. (Presumably, they were paid by the foot, not by the word!) The quote says that it was not unusual for compositors to create more than 1500 (characters?) per hour.
From searching through the ACM bibliography, it seems the word string acquired its meaning in computer science during the 1960s. At the beginning a string was a general kind of sequence or list, e.g. A command language for handling strings of symbols from 1958.
This article explicitly mentions "character strings" in 1964.
Unfortunately I can't access the full texts, which are behind a toll booth.
I had guessed that "string" was in use by mathematicians long before its adoption in programming languages. Turing machines effectively operate on strings. Turing may not have used the term, but it is used everywhere in automata textbooks, going back decades.
The earliest reference I could find was a fragment in Google books of a 1944 article "Recursively enumerable sets of positive integers and their decision problems" by logician Emil Post in Bulletin of the AMS. Fortunately, AMS provides online archives of complete articles free for download. Here is a link: http://www.ams.org/journals/bull/1944-50-05/S0002-9904-1944-08111-1/S0002-9904-1944-08111-1.pdf
I think there is little doubt that he is using "string" in the conventional sense used in computer science. P. 286: "For working purposes, we introduce the letter b, and consider "strings" of 1's and b's such as 11b1bb1. An operation on such strings such as "b1bP produces P1bb1" we term a normal operation. This particular normal operation is applicable only to strings starting with b1b, and the derived string is then obtained from the given string by first removing the initial b1b, and then tacking on 1bb1 at the end. Thus b1bb becomes b1bb1."
I suspect it's because string originally meant just a sequence of data values: "I'll just string these together" etc. These values didn't have to be characters. One very common use for this general concept happened to be a sequence of characters, and this took over as the general meaning of the word.
The earliest reference I could find in computing is from March 1963's METEOR: A LISP Interpreter for String Transformations by Daniel G. Bobrow at MIT's AI Labs.
However, definition 15d. in the Oxford English Dictionary is:
Computing A linear sequence of records or data.
... and with a first quotation from a 1956 Journal of the Association for Computing Machinery:
Areas are set aside for shuttling strings of control fields back and forth until a completely sorted sequence is obtained.
This use naturally follows on from definition 15c.:
Math., etc. A sequence of symbols or linguistic elements in a definite order.
... and first used in Clarence Irving Lewis and Cooper Harold Langford's Symbolic Logic (1932):
Propositions are not strings of marks, or series of sounds, except incidentally.
This in turn follows on from many other, much earlier definitions for things in a line.
The word was originally used to differentiate between a collection of values whose particular order doesn't matter (for instance, a set of random samples of measurements) and one whose meaning is only preserved when the order is preserved. Originally a string could be a sequence of any kind of values, but since in the post-mainframe era a string of characters is by far the most common kind, the fact that the values are characters became the "default".
A string is a sequence of discrete objects (usually char).
Given that, I would probably venture a guess that it may have to do with a metaphor related to "string of pearls". Each bead on the string is a single character.
It's called a string because it's actually an array of char-type elements.
That being said, the characters are "strung together" via this array, which turns them into a "string".