I need your help: does anyone know how I could represent a float in Java Card?
The floating-point number I need is 0.9. I've heard that I need to use a floating-point type or something like that, but I am not really sure.
Just as most Java Card implementations do not feature a 32-bit integer type, they do not offer floating-point arithmetic either. This actually goes much deeper: the compiled byte code for floating-point operations is not even supported. So in the end you will have to do this yourself or look for vendor support. Note that most smart card CPU cores won't do floating point either, so it would have to be emulated using integer arithmetic.
If you require arithmetic on real numbers for money calculations or similar, then you'd best look into fixed-point arithmetic. One trick is to simply perform calculations where each value is multiplied by 100, i.e. to do the calculations in cents. So 0.90 times 10 becomes 90 times 10. Then, at the terminal, you can simply re-insert the decimal separator.
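As a rough illustration of the cents trick, here is a sketch in plain Java rather than an actual Java Card applet (the class and variable names are invented for the example; on the card you would keep only the short arithmetic inside your applet and leave the formatting to the terminal):

// Sketch only: the "work in cents" trick using short arithmetic,
// since short is the widest type guaranteed to exist on Java Card.
public class FixedPointCents {
    public static void main(String[] args) {
        short priceInCents = 90;                                  // 0.90 stored as 90 cents
        short quantity = 10;
        short totalInCents = (short) (priceInCents * quantity);   // 900, i.e. 9.00
        // Only at the terminal is the decimal separator re-inserted:
        System.out.println(totalInCents / 100 + "." + String.format("%02d", totalInCents % 100));
    }
}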
If you need 32-bit integer calculations (the int type is optional in Java Card and usually not supported), check my X-mas special answer here ... probably a contender for the most complex answer on SO when it comes to code. That way you can do 32-bit calculations, which you may need for higher precision (~9 decimal digits instead of the ~4 you get with shorts).
If you only need to store a float, then simply encode the floating-point value to bytes, e.g. using DataOutputStream, and store the resulting bytes. Alternatively, encode the value as packed BCD and use an extra byte to record where the decimal separator belongs.
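For the storage-only case, a small sketch of the DataOutputStream route (plain Java; the class name is invented for the example, and on the card you would simply keep the resulting four bytes in a byte array):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FloatToBytes {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        out.writeFloat(0.9f);                     // IEEE 754, big-endian, 4 bytes
        byte[] encoded = buffer.toByteArray();
        System.out.println(java.util.Arrays.toString(encoded)); // should print [63, 102, 102, 102]
    }
}

Decoding on the terminal side is the mirror image using DataInputStream.readFloat().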
What exactly is the difference between a constant and a variable? Can constants be understood as values that can be assigned to variables in a program?
You basically have the right idea. A fully correct answer is hard to give with the small amount of information in your question.
"constant" is a bit vague. The name is used to refer to literals, symbolic constants, constant expressions, immutable variables ...
Anyway, the answer depends strongly on what language you're using and what context you've heard the term "constant" in.
For example, in the C programming language, most symbolic constants do not exist at runtime. They are simply names that are replaced by their actual literal values as the first step before compilation.
In other languages, constants are named variables saved into the built program: they contain a value that can't be changed, and they actually exist at runtime, so they can be listed in a symbol table and so on.
Wait, constants are sometimes variables?
Well, the terms "constant" and "variable" are somewhat vague concepts that are sometimes used loosely, and they don't have a straight translation into machine code.
At the core, there is just memory, and memory contains data. Constants are usually parts of memory that are loaded from disk for you by the operating system together with your compiled code, and your code can then read them. Variables are parts of memory for which "gaps" are set aside by the system, and your code can then put values into them or read them back.
That's why it's a bit hard to offer a concise definition of a variable and a constant: it depends on what level of the computer you are looking at, and through which language.
In most languages, a symbolic constant is simply a more convenient name you can use in your code to refer to a fixed number or other literal value. A name whose value you can change in one central location before you compile your code, and all other places that use the symbolic name automatically pick up the value.
Variables are boxes into which you can put any value.
So you're basically right. But there can be more to the story depending on what language you're using.
The reason for symbolic constants is mostly to make your code more readable. Instead of
leftCoordinate = 16 + 20 + 4
you can write
leftCoordinate = LEFT_MARGIN + SIDEBAR_WIDTH + LINE_WIDTH
and suddenly it is much more obvious which of these numbers you have to change to change the right part. Also, you can use them to make sure two numbers always match. Like, elsewhere in your program, you may have the code that draws the "line" mentioned above, and just do
setLineWidth(LINE_WIDTH)
drawLine(LEFT_MARGIN + SIDEBAR_WIDTH, 0, LEFT_MARGIN + SIDEBAR_WIDTH, 100)
And if you ever decide you want a thinner line, you just change the constant's value, and all your code magically updates, and you just need to recompile.
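For instance, in Java (just an illustration; the constant names are the ones used in the example above, and the values are made up) the same idea looks like this:

// Illustration only: the layout values from the example above as Java symbolic constants.
public class LayoutExample {
    static final int LEFT_MARGIN = 16;
    static final int SIDEBAR_WIDTH = 20;
    static final int LINE_WIDTH = 4;

    public static void main(String[] args) {
        int leftCoordinate = LEFT_MARGIN + SIDEBAR_WIDTH + LINE_WIDTH;
        System.out.println("leftCoordinate = " + leftCoordinate); // prints 40
    }
}

Change LINE_WIDTH in one place, recompile, and every use of it picks up the new value.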
The code below results in moneyDouble = 365.24567874299998, but I need it to be exactly 365.245678743.
I wouldn't mind having to set a precision and getting some extra zeros to the right.
This number is used to calculate money transactions, so it needs to be exact.
std::string money ("365.245678743");
std::string::size_type sz; // alias of size_t
double moneyDouble = std::stod (money,&sz);
Floating-point numbers and exact precision don't mix, period [link]. For this reason, monetary calculations should never be done in floating-point [link]. Use a fixed-point arithmetic library, or just use integer arithmetic and interpret it as whatever fractional part you need. Since your precision requirements seem to be very high (lots of decimals), a big number library might be necessary.
While library recommendations are off-topic on Stack Overflow, this old question seems to offer a good set of links to libraries you might find useful.
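To make the basic idea concrete, here is a minimal sketch, written in Java purely for illustration (BigDecimal is Java's arbitrary-precision decimal type; the same approach carries over to any fixed-point, decimal, or big-number library in C++): parse the money string into an exact decimal value, or into a scaled 64-bit integer, instead of a binary double.

import java.math.BigDecimal;

public class ExactMoney {
    public static void main(String[] args) {
        BigDecimal money = new BigDecimal("365.245678743");   // parsed exactly, no binary rounding
        System.out.println(money.multiply(BigDecimal.TEN));   // 3652.456787430

        // Equivalent scaled-integer view: the same amount as "nano-units" in a 64-bit integer.
        long nanoUnits = money.movePointRight(9).longValueExact();
        System.out.println(nanoUnits);                        // 365245678743
    }
}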
The erroneous output you see for moneyDouble comes from the fact that moneyDouble is a floating-point number, and floating-point numbers cannot express tenths, hundredths, thousandths, etc. exactly.
Furthermore, floating-point numbers are expressed in binary form, meaning that only (some) binary fractions can be represented exactly in floating point. On top of that, they have finite precision, so they can store only a limited number of digits (including those after the decimal point).
Your best bet is to use fixed-point arithmetic, integer arithmetic, or to implement a rational-number class, and you might need big-number libraries since you may have to deal with very big numbers at very high precision.
See Is floating point math broken? for more information about the unexpected results of floating-point accuracy.
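If you are curious what value your double actually holds, one quick way to see it (again sketched in Java for illustration; new BigDecimal(double) prints the exact value of the stored binary double) is:

import java.math.BigDecimal;

public class WhatIsStored {
    public static void main(String[] args) {
        double moneyDouble = 365.245678743;
        // Prints the exact decimal expansion of the binary value the double holds,
        // a long expansion beginning 365.2456787429999... rather than 365.245678743.
        System.out.println(new BigDecimal(moneyDouble));
    }
}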
I've been trying to use Fortran for my research project, with the GNU Fortran compiler (gfortran), latest version,
but I've been encountering some problems in the way it processes real numbers. If you have for example the code:
program test
implicit none
real :: y = 23.234, z
z = y * 100000
write(*,*) y, z
end program
You'll get as output:
23.2339993   2323400.0
I find this really strange.
Can someone tell me what's exactly happening here? Looking at z I can see that y does retain its precision, so for calculations that shouldn't be a problem I suppose. But why is the output of y not exactly the same as the value that I've specified, and what can I do to make it exactly the same?
This is not a problem - all you see is the floating-point representation of the number in the computer. The computer cannot handle real numbers exactly, only approximations of them. A good read about this can be found here: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Simply by replacing real with double precision, you can increase the number of significant decimal places from about six to about 15 on most platforms.
The general issue is not specific to Fortran; it is the representation of base-10 real numbers in another base of finite precision. This computer science question has been asked many times here.
For the specifically Fortran aspects: the declaration "real" will likely give you a single-precision floating-point variable, as will writing the constant "23.234" without a kind suffix. The constant "100000" without a decimal point is an integer, so the expression "y * 100000" causes an implicit conversion of that integer to a real, because "y" is a real variable.
For some previous discussions of these issues see Extended double precision, Fortran: integer*4 vs integer(4) vs integer(kind=4) and Is There a Better Double-Precision Assignment in Fortran 90?
The problem here is not with Fortran; in fact it is not a problem at all. This is just a feature of floating-point arithmetic. If you think about how you would represent 23.234 as a single-precision float in binary, you would see that the number can only be stored to so many bits of precision.
The thing to remember about floating-point numbers is: numbers that look round and even in base 10 probably won't be in binary.
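To make that concrete, here is a worked expansion (my own illustration, not part of the original answer), assuming IEEE 754 single precision:

$$23.234_{10} = 10111.0011\,1011\,1110\,0111\ldots_{2}$$

The binary fraction never terminates, so the value is rounded to the 24 significant bits a single-precision float can hold, which is approximately 23.2339993, exactly the value the program prints.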
For a brief overview of floating-point topics, check the Wikipedia article. And for a VERY thorough explanation, check out the canonical paper by Goldberg (PDF).
Whilst doing exam revision I am having trouble answering the following question from the book, "An Introduction to the Theory of Computation" by Sipser. Unfortunately there's no solution to this question in the book.
Explain why the following is not a legitimate Turing machine.
M = {
The input is a polynomial p over variables x1, ..., xn
Try all possible settings of x1, ..., xn to integer values
Evaluate p on all of these settings
If any of these settings evaluates to 0, accept; otherwise reject.
}
This is driving me crazy! I suspect it is because the set of integers is infinite? Does this somehow exceed the alphabet's allowable size?
Although this is quite an informal way of describing a Turing machine, I'd say the problem is one of the following:
otherwise reject - I agree with Welbog on that. Since you have a countably infinite set of possible settings, the machine can never know whether a setting on which p evaluates to 0 is still to come, and it will loop forever if it doesn't find one; it may only stop when such a setting is encountered. So the "otherwise reject" clause is useless and can never actually be reached, unless of course you limit the machine to a finite set of integers.
The code order: I would read this pseudocode as "first write all possible settings down, then evaluate p on each one" and there's your problem:
Again, with an infinite set of possible settings, not even the first part will ever terminate, because there is never a last setting to write down before continuing with the next step. In this case, not only can the machine never say "there is no setting that evaluates to 0", it can never even start evaluating to find one. This, too, would be solved by limiting the integer set.
Anyway, I don't think the problem is the alphabet's size. You wouldn't need an infinite alphabet, since your integers can be written in decimal / binary / etc., and those use only a (very) finite alphabet.
I'm a bit rusty on Turing machines, but I believe your reasoning is correct, i.e. the set of integers is infinite, therefore you cannot check them all. I am not sure how to prove this formally, though.
However, the easiest way to get your head around Turing machines is to remember: "Anything a real computer can compute, a Turing machine can also compute." So, if you could write a program that, given a polynomial, carries out the three steps above, you would be able to find a Turing machine that can do it too.
I think the problem is with the very last part: otherwise reject.
According to basic facts about countable sets, any finite Cartesian product of countable sets is itself countable. In your case you have vectors of n integers, and that set is countable. So the set of possible settings is countable, and it is therefore possible to try every combination of them, that is to say, without missing any combination (a small enumeration sketch follows below).
Also, computing the result of p on a given set of inputs is also possible.
And entering an accepting state when p evaluates to 0 is also possible.
However, since there is an infinite number of input vectors, you can never reject the input. Therefore no Turing machine can follow all of the rules defined in the question. Without that last rule, it is possible.
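To make the countability argument concrete, here is a small sketch (in Java; the class name and the bound of 2 are purely illustrative) of a fair way to enumerate integer settings, shown for n = 2: every pair in Z^2 is reached after finitely many steps, so "try all possible settings" is a perfectly good enumeration even though it never finishes.

// Sketch: enumerate Z^2 by increasing "radius" max(|x|, |y|), so that any given
// pair is reached after finitely many steps. A real search would run the outer
// loop forever; the bound of 2 is only so this example terminates.
public class EnumerateIntegerPairs {
    public static void main(String[] args) {
        for (int bound = 0; bound <= 2; bound++) {
            for (int x = -bound; x <= bound; x++) {
                for (int y = -bound; y <= bound; y++) {
                    if (Math.max(Math.abs(x), Math.abs(y)) == bound) { // skip pairs already listed
                        System.out.println("(" + x + ", " + y + ")");
                    }
                }
            }
        }
    }
}

A machine enumerating settings this way can accept as soon as it finds a root of p, but it can never conclude that no root exists, which is exactly why the "otherwise reject" clause cannot be implemented.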
Is there a finite number of questions that can be asked about a specific language (and/or topic)? For example, given that T-SQL has only so many commands, can there be only a limited number of non-repetitive questions? If so, could you use that to determine sizing for a site like Stack Overflow, and to determine the probability of a new question being a repeat of a prior one? If there is a finite number, how would you determine or calculate it? For instance, T-SQL has x commands, and each one has a set of relevant questions (syntax, examples of use, etc.), so could the number of questions be x times the number of potential questions per command times some relevant variation, or something like that?
No: theoretically, programs can be arbitrarily long, and this site is not just about language commands but about programs developed with those languages.
I'm pretty sure Turing says no, and if you don't believe him then Gödel might have something to say about it.
A Stack Overflow question is expressed as a finite-length sequence of bytes. One could in principle consider the question body as an integer, expressed lowest digit first, in base 256 (or larger, if you wish to think about it as Unicode). This maps each question to a distinct whole number. Therefore the set of all possible Stack Overflow questions has countably infinite cardinality. (How do I typeset \aleph_0 on SO?)
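As a toy illustration of that encoding (my own sketch, not part of the original argument; the two-character body "Hi" is arbitrary): read the bytes of a question body as base-256 digits, lowest digit first, and you get a distinct whole number for every question.

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;

// Toy sketch: a question body interpreted as a base-256 number, lowest digit first.
public class QuestionAsNumber {
    public static void main(String[] args) {
        byte[] body = "Hi".getBytes(StandardCharsets.US_ASCII); // bytes 72, 105
        BigInteger value = BigInteger.ZERO;
        for (int i = body.length - 1; i >= 0; i--) {            // Horner's rule from the highest digit down
            value = value.multiply(BigInteger.valueOf(256)).add(BigInteger.valueOf(body[i] & 0xFF));
        }
        System.out.println(value); // 72 + 105 * 256 = 26952
    }
}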