I have started to create an XSD and found a couple of examples using both xs:integer and xs:int.
What is the difference between xs:integer and xs:int?
When should I use xs:integer?
When should I use xs:int?
The difference is the following:
xs:int is a signed 32-bit integer.
xs:integer is an unbounded integer value.
For details, see https://web.archive.org/web/20151117073716/http://www.w3schools.com/schema/schema_dtypes_numeric.asp
For example, XJC (Java) generates Integer for xs:int and BigInteger for xs:integer.
The bottom line: use xs:int if you want to work across platforms and be sure that your numbers will pass without a problem.
If you want bigger numbers, use xs:long instead of xs:integer (it will be generated as Long).
The xs:integer type is a restriction of xs:decimal, with the fractionDigits facet set to zero and with a lexical space which forbids the decimal point and trailing zeroes which would otherwise be legal. It has no minimum or maximum value, though implementations running in machines of finite size are not required to be able to accept arbitrarily large or small values. (They are required to support values with 16 decimal digits.)
The xs:int type is a restriction of xs:long, with the maxInclusive facet set to 2147483647 and the minInclusive facet to -2147483648. (As you can see, it will fit conveniently into a two's-complement 32-bit signed-integer field; xs:long fits in a 64-bit signed-integer field.)
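Those facet values line up exactly with the C fixed-width limits, which is easy to sanity-check (a quick sketch, assuming a C99-or-later compiler with <stdint.h>):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* xs:int facets:  minInclusive -2147483648, maxInclusive 2147483647 */
        printf("int32_t: %" PRId32 " .. %" PRId32 "\n", INT32_MIN, INT32_MAX);

        /* xs:long facets: minInclusive -9223372036854775808,
           maxInclusive 9223372036854775807 */
        printf("int64_t: %" PRId64 " .. %" PRId64 "\n", INT64_MIN, INT64_MAX);
        return 0;
    }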
The usual rule is: use the one that matches what you want to say. If the constraint on an element or attribute is that its value must be an integer, xs:integer says that concisely. If the constraint is that the value must be an integer that can be expressed with at most 32 bits in two's-complement representation, use xs:int. (A secondary but sometimes important concern is whether your tool chain works better with one than with the other. For data that will live longer than your tool chain, it's wise to listen to the data first; for data that exists solely to feed the tool chain, and which will be of no interest if you change your tool chain, there's no reason not to listen to the tool chain.)
I would just add a note of pedantry that may be important to some people: it's not correct to say that xs:int "is" a signed 32-bit integer. That form of words implies an implementation in memory (or registers, etc) within a binary digital computer. XML is character-based and would implement the maximum 32-bit signed value as "2147483647" (my quotes, of course), which is a lot more than 32 bits! What IS true is that xs:int is (indirectly) a restriction of xs:integer which sets the maximum and minimum allowed values to be the same as the corresponding implementation-imposed limits of a 32-bit integer with a sign bit.
Why is GHC's Int type not guaranteed to use exactly 32 bits of precision? This document claims it has at least 30-bit signed precision. Is it somehow related to fitting Maybe Int or similar into 32 bits?
It is to allow implementations of Haskell that use tagging. When using tagging you need a few bits as tags (at least one, two is better). I'm not sure there currently are any such implementations, but I seem to remember Yale Haskell used it.
Tagging can somewhat avoid the disadvantages of boxing, since you no longer have to box everything; instead the tag bit will tell you if it's evaluated etc.
The Haskell language definition states that the type Int covers at least the range [−2^29, 2^29 − 1].
There are other compilers/interpreters that use this property to speed up the resulting program.
All internal references to (aligned) Haskell data point to memory addresses that are a multiple of 4 (8) on 32-bit (64-bit) systems. So references only need 30 bits (61 bits), which leaves 2 (3) bits for "pointer tagging".
In the case of data, GHC uses those tags to store information about the referenced data, i.e. whether that value is already evaluated and, if so, which constructor it has.
In the case of 30-bit Ints (so, not GHC), you could use one bit to decide whether the word is a pointer to an unevaluated Int or the Int itself.
Pointer tagging could also be used for one-bit reference counting, which can speed up garbage collection. That can be useful where a direct one-to-one producer-consumer relationship is created at runtime: it would result directly in memory reuse instead of feeding the garbage collector.
So, using 2 bits for pointer tagging, there could be some wild combinations of intense optimisation...
In the case of Ints I could imagine these four tags:
a singular reference to an unevaluated Int
one of many references to the same possibly still unevaluated Int
30 bits of that Int itself
a reference (possibly one of many) to an evaluated 32-bit Int.
I think this is because of early ways to implement GC and all that stuff. If you have 32 bits available and you only need 30, you could use those two spare bits to implement interesting things, for instance using a zero in the least significant bit to denote a value and a one for a pointer.
Today the implementations don't use those bits, so an Int has at least 32 bits on GHC. (That's not entirely true; IIRC you can set some flags to get 30- or 31-bit Ints.)
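As a rough illustration of that least-significant-bit scheme (a sketch in C, not GHC's actual representation; it only assumes allocations are at least 2-byte aligned so the low pointer bit is free):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical tagging scheme: low bit 0 = the value itself (shifted left
       by one), low bit 1 = a pointer to a heap object. It relies on malloc
       returning addresses aligned to at least 2 bytes, so the low bit of a
       pointer is always free. For simplicity only non-negative immediates
       are handled. */
    typedef uintptr_t tagged;

    static tagged    tag_int(uintptr_t n) { return n << 1; }              /* low bit 0 */
    static tagged    tag_ptr(void *p)     { return (uintptr_t)p | 1u; }   /* low bit 1 */
    static int       is_int(tagged t)     { return (t & 1u) == 0; }
    static uintptr_t untag_int(tagged t)  { return t >> 1; }
    static void     *untag_ptr(tagged t)  { return (void *)(t & ~(uintptr_t)1); }

    int main(void) {
        long *boxed = malloc(sizeof *boxed);  /* stands in for a boxed, "unevaluated" value */
        *boxed = 42;

        tagged a = tag_int(21);       /* the number lives in the word itself   */
        tagged b = tag_ptr(boxed);    /* the word is a reference to the number */

        assert(is_int(a) && !is_int(b));
        printf("a = %lu (immediate)\n", (unsigned long)untag_int(a));
        printf("b = %ld (through pointer)\n", *(long *)untag_ptr(b));

        free(boxed);
        return 0;
    }

The price is exactly the one described above: an "immediate" integer only gets the bits left over after the tag, which is why the language definition only promises 30 bits.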
I have a C program that I know works on 32-bit systems. On 64-bit systems (at least mine) it works to a point and then stops. Reading some forums, it sounds like the program may not be 64-bit safe? I assume it has to do with differences in data types between 32-bit and 64-bit systems.
Is a char the same on both? What about int or long or their unsigned variants? Is there any other way a 32-bit program wouldn't be 64-bit safe? If I wanted to verify the application is 64-bit safe, what steps should I take?
Regular data types in C have minimum ranges of values rather than specific bit widths. For example, a short has to be able to represent, at a minimum, -32767 thru 32767 inclusive.
So, yes, if your code depends on values wrapping around at 32768, it's unlikely to behave well if your short is some big honking 128-bit behemoth.
If you want specific-width data types, look into stdint.h for things like int64_t and so on. There is a wide variety to choose from: exact widths, "at least" widths, and so on. The standard also mandates two's complement for the exact-width types, unlike the "regular" integral types:
integer types having certain exact widths;
integer types having at least certain specified widths;
fastest integer types having at least certain specified widths;
integer types wide enough to hold pointers to objects;
integer types having greatest width.
For example, from C11 7.20.1.1 Exact-width integer types:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes such a signed integer type with a width of exactly 8 bits.
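Each of those five categories has a concrete spelling in <stdint.h>/<inttypes.h>; a minimal sketch touching one of each (assuming a C99-or-later toolchain):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int32_t       exact = INT32_MAX;         /* exactly 32 bits, two's complement      */
        int_least16_t least = 32767;             /* at least 16 bits, possibly more        */
        int_fast32_t  fast  = 123456;            /* the "fastest" type of at least 32 bits */
        uintptr_t     addr  = (uintptr_t)&exact; /* wide enough to hold an object pointer  */
        intmax_t      big   = INTMAX_MAX;        /* the greatest-width signed integer type */

        printf("int32_t max    : %" PRId32 "\n", exact);
        printf("int_least16_t  : %" PRIdLEAST16 "\n", least);
        printf("int_fast32_t   : %" PRIdFAST32 "\n", fast);
        printf("address as int : %" PRIxPTR "\n", addr);
        printf("intmax_t max   : %" PRIdMAX "\n", big);
        return 0;
    }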
Provided you have followed the rules (things like not casting pointers to integers), your code should compile and run on any implementation, and any architecture.
If it doesn't, you'll just have to start debugging, then post the detailed information and the code that seems to be causing the problem on a forum site dedicated to such things. Now where have I seen one of those recently? :-)
Why isn't Byte the default backing type for enumerations?
Int32 seems like overkill. Every Enum I have ever used has contained under 20 values. It's a micro-optimization, but it seems as though all common usages have under 256 values [in my limited experience].
Is there a more technical reason, or is just to make sure "all common" scenarios of Enum usage work fine and Int32 isn't really that expensive in all but the corner cases?
In addition, I've gotten the impression that the compiler does not change the backing type (or at least, nothing I've read mentions it optimizing enumerations). Does it optimize the backing type, since it knows the range of values at compile time?
Because the x86 instruction set supports int32 operations natively, there is no performance gain from using int8 operands. Also, memory accesses to operands that are naturally aligned (e.g. a 32-bit value at an address divisible by 4) are carried out faster than accesses where a sub-word has to be shuffled into place first.
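As an aside, the same trade-off shows up in C, where an enum with only a handful of values still typically gets the size of an int (a quick check; the result is compiler-dependent):

    #include <stdio.h>

    /* Only three values, all of which would fit in a single byte... */
    enum color { RED, GREEN, BLUE };

    int main(void) {
        /* ...yet most compilers still give the enum the size of an int,
           because the natural word size is what the CPU handles best. */
        printf("sizeof(enum color) = %zu\n", sizeof(enum color));
        printf("sizeof(int)        = %zu\n", sizeof(int));
        return 0;
    }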
Lua by default uses a double-precision floating point (double) type as its only numeric type. That's nice and useful. However, I'm working on software that expects to see 64-bit integers, and I can't get around using actual 64-bit integers one way or another.
The place where the integer type becomes relevant is for file sizes. Although I don't truly expect to see file sizes beyond what Lua can represent with full "integer" precision using a double, I want to be prepared.
What strategies can you recommend for using a 64-bit integer type in parallel with the default numeric type of Lua? I don't really want to throw the default implementation overboard (and I'm not worried about its performance compared to integer arithmetic), but I need some way of representing 64-bit integers to their full precision without too much of a performance penalty.
My problem is that I'm unsure where to modify the behavior. Should I modify the syntax and extend the parser (numbers with an appended LL or ULL come to mind, which to my knowledge don't exist in default Lua), or should I instead write my own C module and define a userdata type that represents the 64-bit integer, along with library functions able to manipulate the values? ...
Note: yes, I am embedding Lua, so I am free to extend it whichever way I please.
As part of LuaJIT's port to ARM CPUs (which often have poor floating-point support), LuaJIT implemented a "Dual-number VM", which allows it to switch between integers and floats dynamically as needed. You could use this yourself: just switch between 64-bit integers and doubles instead of 32-bit integers and floats.
It's currently live in builds, so you may want to consider using LuaJIT as your Lua "interpreter." Or you could use it as a way to learn how to do this sort of thing.
However, I do agree with Marcelo; the 53-bit mantissa should be plenty. You shouldn't really need this for a good 10 years or so.
I'd suggest storing your data outside of Lua and using some type of reference to retrieve it when calling your other libraries. You can then push various results onto the Lua stack for the user to see (you can even retrieve the value as a string to be precise), but I would avoid modifying the values in Lua and relying on the Lua-side values when calling your external library.
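For what it's worth, the userdata route mentioned in the question might look roughly like this (a sketch against the standard Lua C API; the "int64" module name and its functions are invented for illustration):

    /* Rough sketch of a boxed 64-bit integer as userdata (Lua 5.1/5.2-style
       C API; the "int64" module name and its functions are invented here). */
    #include <stdint.h>
    #include <stdio.h>
    #include <lua.h>
    #include <lauxlib.h>

    #define INT64_MT "int64_mt"   /* metatable name for the boxed integer */

    static int64_t *check_int64(lua_State *L, int idx) {
        return (int64_t *)luaL_checkudata(L, idx, INT64_MT);
    }

    static int push_int64(lua_State *L, int64_t v) {
        int64_t *box = (int64_t *)lua_newuserdata(L, sizeof *box);
        *box = v;
        luaL_getmetatable(L, INT64_MT);
        lua_setmetatable(L, -2);
        return 1;
    }

    /* int64.new(n) -> boxed 64-bit integer built from a plain Lua number */
    static int int64_new(lua_State *L) {
        return push_int64(L, (int64_t)luaL_checknumber(L, 1));
    }

    /* a + b via the __add metamethod */
    static int int64_add(lua_State *L) {
        return push_int64(L, *check_int64(L, 1) + *check_int64(L, 2));
    }

    /* tostring(a), so the exact value can be shown to the user */
    static int int64_tostring(lua_State *L) {
        char buf[32];
        sprintf(buf, "%lld", (long long)*check_int64(L, 1));
        lua_pushstring(L, buf);
        return 1;
    }

    int luaopen_int64(lua_State *L) {
        luaL_newmetatable(L, INT64_MT);
        lua_pushcfunction(L, int64_add);      lua_setfield(L, -2, "__add");
        lua_pushcfunction(L, int64_tostring); lua_setfield(L, -2, "__tostring");
        lua_pop(L, 1);

        lua_newtable(L);                      /* the module table returned to require() */
        lua_pushcfunction(L, int64_new);      lua_setfield(L, -2, "new");
        return 1;
    }

A real module would probably also accept string arguments in new, since number literals above 2^53 already lose precision before they ever reach C.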
If you're not going to need floating-point precision at any point in the program, you can just redefine LUA_NUMBER to __int64 (or whatever 64-bit int may be in your environment) in luaconf.h.
Otherwise, you can just bring in another library to handle your integers; for arbitrary precision, you can use a bignum library such as lhf's lbn.