Why is 0128 in octal considered not valid to convert to decimal?

I'm practicing for an exam and working on literals, and a question came up that asked me to convert octal 0128 into decimal. The solution I have says it has too many bits to be considered octal, so it can't be converted into decimal either, but the reasoning behind this is not given.
Do you know why? I'm trying to figure it out, but I couldn't find any answer yet.

The digit 8 is not a valid octal digit (octal digits run from 0 to 7), so 0128 as a whole is not a valid octal literal. One answer is therefore "invalid input", but a different answer might be to consider the input as "012", with the first non-octal character acting as the terminator of the octal number. The answer would then be 10 decimal.
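A quick sketch in Python (just for illustration; the exam's language may differ) showing both interpretations:
import re
int("0128", 8)                               # raises ValueError: 8 is not a digit in base 8
int(re.match("[0-7]+", "0128").group(), 8)   # parses only the "012" prefix, like C's strtol: gives 10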

Related

KDB: generate random string

How can one generate a random string of a given length in KDB? The string should be composed of both upper- and lower-case alphabet characters as well as digits, and the first character cannot be a digit.
Example:
"i0J2Jx3qa" / OK
"30J2Jx3qa" / bad
Thank you very much for your help!
stringLength: 13
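/ first character from letters only (.Q.A,.Q.a); the remaining characters from digits and letters (.Q.nA,.Q.a)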
randomString: (1 ? .Q.A,.Q.a) , ((stringLength-1) ? .Q.nA,.Q.a)
If you prefer without the repetitions:
raze(1,stringLength-1)?'10 0_\:.Q.nA,.Q.a
For the purposes of creating random data you can also use ?/deal with a number of characters up to 8 as a symbol (which you could then string). This doesn't include digits though, so it's just an alternative approach to your own answer.
1?`8
,`bghgobnj
There's already a fine answer above which has been accepted. Just wanted somewhere to note that if this is to generate truly random data you need to consider randomising your seed. This can be done in Linux by using $RANDOM in bash or reading up to four bytes from /dev/random (relatively recent versions of kdb can read directly from FIFOs).
Otherwise the seed is set to digits from pi: 314159
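A minimal sketch of seeding from bash, assuming the -S startup option that sets the random seed (myscript.q is a placeholder):
q myscript.q -S $RANDOM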

Lexicographical order of numbers

I'm currently learning about lexicographical sorting, but not much can be found for numbers. The example I found is based on What is lexicographical order?
In the example, it is said that
1 10 2
are in lexicographical ordering. The answer stated that "10 comes after 2 in numerical order but 10 comes before 2 in alphabetical order". I would like to know what "10 comes before 2 in alphabetical order" really means. Is 10 represented as a character in ASCII or something? I'm really confused.
Would it be something in python where:
ord(10)
Yes, lexicographic implies textual. I would fault the typography. When discussing a text string, that is usually made clear by using the literal text string syntax (for some programming language). "10" comes before "2".
There is no text but encoded text.
So that implies a character encoding of a character set. A character set is a mapping between a character and a codepoint (integer). An encoding maps between a codepoint and a sequence of code units for that encoding. A code unit is an integer of a fixed size. When an integer of a fixed size is stored as a sequence of bytes, it has a byte order (unless the size is 1).
Lexicographic could refer to ordering by the sequence of:
codepoint values
code unit values
byte values
For some character sets and encodings, these orders would all be the same. For some of those, the values would all be the same.
(Not sure why you would mention ASCII. You are almost certainly not using a programming environment that uses ASCII natively. You should look that up for your environment to avoid ASCII-splaining. Python 3.)
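A small illustration in Python 3 (the asker's apparent environment) comparing numeric order with lexicographic order on the string forms:
nums = [1, 10, 2]
sorted(nums)                     # numeric order: [1, 2, 10]
sorted([str(n) for n in nums])   # lexicographic order: ['1', '10', '2']
"10" < "2"                       # True, since '1' (code point 49) sorts before '2' (code point 50)
ord("1"), ord("2")               # (49, 50); note ord takes a one-character string, not the integer 10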

FORTRAN 77 Read from .mtx file [duplicate]

This question already has answers here:
Is floating point math broken?
I've been trying to use Fortran for my research project, with the GNU Fortran compiler (gfortran), latest version,
but I've been encountering some problems in the way it processes real numbers. If you have, for example, the code:
program test
  implicit none
  real :: y = 23.234, z
  z = y * 100000
  write(*,*) y, z
end program
You'll get as output:
23.2339993 2323400.0
I find this really strange.
Can someone tell me what exactly is happening here? Looking at z I can see that y does retain its precision, so for calculations that shouldn't be a problem, I suppose. But why is the output of y not exactly the same as the value I specified, and what can I do to make it exactly the same?
This is not a problem - all you see is the floating-point representation of the number in the computer. The computer cannot handle real numbers exactly, only approximations of them. A good read about this can be found here: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Simply by replacing real with double precision, you can increase the number of significant decimal places from about six to about 15 on most platforms.
The general issue is not limited to Fortran; it is the representation of base-10 real numbers in another base of finite precision. This computer-science question is asked many times here.
For the specifically Fortran aspects: the declaration "real" will likely give you a single-precision floating-point variable, as will expressing a constant as "23.234" without a type qualifier. The constant "100000" without a decimal point is an integer, so the expression "y * 100000" causes an implicit conversion of an integer to a real because "y" is a real variable.
For some previous discussions of these issues, see Extended double precision, Fortran: integer*4 vs integer(4) vs integer(kind=4), and Is There a Better Double-Precision Assignment in Fortran 90?
The problem here is not with Fortran, in fact it is not a problem at all. This is just a feature of floating-point arithmetic. If you think about how you would represent 23.234 as a 'single float' in binary, you would see that the number has to be saved to only so many decimals of precision.
The thing to remember about floating-point numbers is: numbers that look round and even in base 10 probably won't be in binary.
For a brief overview of floating-point topics, check the Wikipedia article. And for a VERY thorough explanation, check out the canonical paper by Goldberg (PDF).
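A minimal sketch in Python (not Fortran, just to show the same representation issue; the helper name is made up for illustration):
import struct
def nearest_single(x):
    # round-trip through a 4-byte IEEE-754 single, analogous to Fortran's default real
    return struct.unpack("f", struct.pack("f", x))[0]
nearest_single(23.234)            # 23.233999252319336, the closest single-precision value to 23.234
23.234                            # as a double this prints 23.234, hence the double precision advice
nearest_single(23.234) * 100000   # about 2323399.93; rounded back to single precision it is 2323400.0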

The best way to resolve ambiguity for hexadecimal versus decimal strings

Let's say I want to accept user input as a string -- and it can either be a decimal or hexadecimal string -- and then I want to parse it into an integer.
The problem is, for some strings this is ambiguous: "12345", "00001", and other short strings with no "letter" digits.
So, I'd like to allow some way for the users to disambiguate those strings. Obviously they can prefix with "0x" if the string is actually supposed to be a hexadecimal integer, but if it's supposed to be decimal what should they do?
This seems like such a common problem, it must've been solved before.
Is there some sort of standard that's been adopted?
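One widely used convention is prefix-based parsing, as in C's strtol with base 0 or Python's int(s, 0): a "0x" prefix means hexadecimal, an unprefixed string means decimal. A minimal sketch in Python:
def parse_number(s):
    # base 0 lets the prefix decide: "0x..." is hex, "0o..." is octal, otherwise decimal
    return int(s, 0)
parse_number("0x12345")    # 74565
parse_number("12345")      # 12345
# parse_number("00001")    # ValueError in Python 3: leading zeros are rejected as ambiguous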

Why doesn't SameText work?

Why does
if SameText(ListBox1.Items[i],Edit1.Text)=true then
not work? It behaves case-sensitively (the strings differ in case), but it should not. The strings are Unicode. It works if the strings have the same case.
Thanks!
According to SysUtils.pas (Delphi-XE), SameText "has the same 8-bit limitations as CompareText", and in CompareText "the compare operation is based on the 8-bit ordinal value of each character, after converting 'a'..'z' to 'A'..'Z', and is not affected by the current user locale."
So it seems that you are trying to compare some characters that are outside the 8-bit range.
Edit: you should try AnsiSameText.
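A minimal sketch of the suggested change, reusing the expression from the question:
if AnsiSameText(ListBox1.Items[i], Edit1.Text) then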
