C++ const keyword doesn't work for 64-bit Constants - visual-c++

I have an old version of Visual Studio ... 2008 to be exact. I am writing this code in C++.
Let's say I were to declare an integer constant that was 64-bits in size:
const DWORD64 FP_FLAGS = 0x8000800080008000;
...
HRESULT ProcessFootprint(DWORD64 nFpFlags);
...
ProcessFootprint(FP_FLAGS);
If I were to use the debugger to halt execution at the last line, then FP_FLAGS would be shown to have the value 0x0000000080008000. In other words, the 4 most significant bytes would be ignored: the constant would be truncated to 32 bits. Adding the ULL suffix doesn't help.
If, on the other hand, I were to make the constant into a variable,
DWORD64 FP_FLAGS = 0x8000800080008000;
...
HRESULT ProcessFootprint(DWORD64 nFpFlags);
...
ProcessFootprint(FP_FLAGS);
then FP_FLAGS would be shown to have the value 0x8000800080008000.
What am I doing wrong? Is there some syntax I need to specify? A pre-processor option? Something in the compiler? I know that the const keyword is related to the compiler.


Reversing Bytes and cross compatible binary parsing in Nim

I've started taking a look at Nim for hobby game modding purposes.
Intro
Yet, I found it difficult to work with Nim compared to C when it comes to machine-specific low-level memory layout and would like to know if Nim actually has better support here.
I need to control byte order and be able to de/serialize arbitrary Plain-Old-Datatype objects to binary custom file formats. I didn't directly find a Nim library which allows flexible storage options like representing enum and pointers with Big-Endian 32-bit. Or maybe I just don't know how to use the feature.
std/marshal : JSON only, i.e. cross-compatible but neither efficient, flexible, nor binary
nim-serialization : seems like being made for human readable formats
nesm : flexible cross-compatibility? (It has some options and has a good interface)
flatty : no flexible cross-compatibility, no byte order?
msgpack4nim : no flexible cross-compatibility, byte order?
bingo : ?
Flexible cross-compatibility means it must be able to de/serialize fields independently of Nim's ABI, but with customization options.
Maybe "Kaitai Struct", a file parser with experimental Nim support, is closer to what I'm looking for.
TL;DR
As a workaround for a serialization library, I tried writing a recursive "member fields reverser" that makes use of std/endians, which is almost sufficient.
But I didn't succeed in implementing byte reversal of arbitrarily long objects in Nim. Not practically relevant, but I still wonder if Nim has a solution.
I found reverse() and reversed() from std/algorithm, but I need a byte array to reverse and then turn back into the original object type. In C++ there would be reinterpret_cast, in C there is a void* cast, in D there is a void[] cast (D allows defining array slices from pointers), but I couldn't get it working in Nim.
I tried cast[ptr array[value.sizeof, byte]](unsafeAddr value)[] but I can't assign it to a new variable. Maybe there was a different problem.
How to "byte reverse" arbitrary long Plain-Old-Datatype objects?
How to serialize to binary files with byte order, member field size, pointer as file "offset - start offset"? Are there bitfield options in Nim?
It is indeed possible to use algorithm.reverse and the appropriate cast invocation to reverse bytes in-place:
import std/[algorithm, strutils, strformat]

type
  LittleEnd {.packed.} = object
    a: int8
    b: int16
    c: int32
  BigEnd {.packed.} = object
    c: int32
    b: int16
    a: int8

## just so we can see what's going on:
proc `$`(b: LittleEnd): string = &"(a:0x{b.a.toHex}, b:0x{b.b.toHex}, c:0x{b.c.toHex})"
proc `$`(l: BigEnd): string = &"(c:0x{l.c.toHex}, b:0x{l.b.toHex}, a:0x{l.a.toHex})"

var lit = LittleEnd(a: 0x12, b: 0x3456, c: 0x789a_bcde)
echo lit # (a:0x12, b:0x3456, c:0x789ABCDE)

var big: BigEnd
copyMem(big.addr, lit.addr, sizeof(lit))

# here's the reinterpret_cast you were looking for:
cast[var array[sizeof(big), byte]](big.addr).reverse
echo big # (c:0xDEBC9A78, b:0x5634, a:0x12)
For C-style bitfields there is also the {.bitsize.} pragma, but using it causes Nim to lose sizeof information, and of course bitfields won't be reversed within bytes:
import std/[algorithm, strutils, strformat]

type
  LittleNib {.packed.} = object
    a {.bitsize: 4.}: int8
    b {.bitsize: 12.}: int16
    c {.bitsize: 20.}: int32
    d {.bitsize: 28.}: int32
  BigNib {.packed.} = object
    d {.bitsize: 28.}: int32
    c {.bitsize: 20.}: int32
    b {.bitsize: 12.}: int16
    a {.bitsize: 4.}: int8

const nibsize = 8

proc `$`(b: LittleNib): string = &"(a:0x{b.a.toHex(1)}, b:0x{b.b.toHex(3)}, c:0x{b.c.toHex(5)}, d:0x{b.d.toHex(7)})"
proc `$`(l: BigNib): string = &"(d:0x{l.d.toHex(7)}, c:0x{l.c.toHex(5)}, b:0x{l.b.toHex(3)}, a:0x{l.a.toHex(1)})"

var lit = LittleNib(a: 0x1, b: 0x234, c: 0x56789, d: 0x0abcdef)
echo lit # (a:0x1, b:0x234, c:0x56789, d:0x0ABCDEF)
var big: BigNib
copyMem(big.addr, lit.addr, nibsize)
cast[var array[nibsize, byte]](big.addr).reverse
echo big # (d:0x5DEBC0A, c:0x8967F, b:0x123, a:0x4)
Copying the bytes over and then rearranging them with reverse is less than optimal anyway, so you might just want to copy the bytes over in a loop. Here's a proc that can swap the endianness of any object, including ones for which sizeof is not known at compile time:
template asBytes[T](x: var T): ptr UncheckedArray[byte] =
  cast[ptr UncheckedArray[byte]](x.addr)

proc swapEndian[T, U](src: var T, dst: var U) =
  assert sizeof(src) == sizeof(dst)
  let len = sizeof(src)
  for i in 0 ..< len:
    dst.asBytes[len - i - 1] = src.asBytes[i]
Bit fields are supported in Nim as a set of enums:
type
  MyFlag* {.size: sizeof(cint).} = enum
    A
    B
    C
    D
  MyFlags = set[MyFlag]

proc toNum(f: MyFlags): int = cast[cint](f)
proc toFlags(v: int): MyFlags = cast[MyFlags](v)

assert toNum({}) == 0
assert toNum({A}) == 1
assert toNum({D}) == 8
assert toNum({A, C}) == 5
assert toFlags(0) == {}
assert toFlags(7) == {A, B, C}
For arbitrary bit operations you have the bitops module, and for endianness conversions you have the endians module. But you already know about the endians module, so it's not clear what problem you are trying to solve with the so-called byte reversal. Usually you have an integer, so you first convert it to, say, big-endian format and save that; when you read it back, you convert from big-endian format and you have the int again. The endianness procs should be dealing with any reversal of bytes, so why do you need to do it yourself? In any case, you can follow the source hyperlink in the documentation and see how the endian procs are implemented. This can give you an idea of how to cast values in case you need to do some yourself.
Since you know C, maybe the last resort would be to write a few serialization functions in C and call them from Nim, or directly embed them using the emit pragma. However, this looks like the least cross-platform and pain-free option.
Can't answer anything about generic data structure serialization libraries. I stay away from them because they tend to require hand-holding, imposing certain limitations on your code, and, depending on the feature set, a simple refactoring (changing field order in your POD) may destroy the binary compatibility of the generated output without you noticing until runtime. So you end up spending additional time writing unit tests to verify that the black box you brought in to save time behaves as you want (and keeps doing so across refactorings and version upgrades!).

Fortran CHARACTER FUNCTION without defined size [duplicate]

I am writing the following simple routine:
program scratch
   character*4 :: word
   word = 'hell'
   print *, concat(word)
end program scratch

function concat(x)
   character*(*) x
   concat = x // 'plus stuff'
end function concat
The program should be taking the string 'hell' and concatenating to it the string 'plus stuff'. I would like the function to be able to take in any length string (I am planning to use the word 'heaven' as well) and concatenate to it the string 'plus stuff'.
Currently, when I run this on Visual Studio 2012 I get the following error:
Error 1 error #6303: The assignment operation or the binary
expression operation is invalid for the data types of the two
operands. D:\aboufira\Desktop\TEMP\Visual
Studio\test\logicalfunction\scratch.f90 9
This error is for the following line:
concat = x // 'plus stuff'
It is not apparent to me why the two operands are not compatible. I have set them both to be strings. Why will they not concatenate?
High Performance Mark's comment tells you about why the compiler complains: implicit typing.
The result of the function concat is implicitly typed because you haven't declared its type otherwise. Although x // 'plus stuff' is the correct way to concatenate character variables, you're attempting to assign that new character object to an (implicitly) real function result.
Which leads to the question: "just how do I declare the function result to be a character?". Answer: much as you would any other character variable:
character(len=length) concat
[note that I use character(len=...) rather than character*.... I'll come on to exactly why later, but I'll also point out that the form character*4 is obsolete according to current Fortran, and may eventually be deleted entirely.]
The tricky part is: what is the length it should be declared as?
When declaring the length of a character function result which we don't know ahead of time there are two1 approaches:
an automatic character object;
a deferred length character object.
In the case of this function, we know that the length of the result is 10 longer than the input. We can declare
character(len=LEN(x)+10) concat
To do this we cannot use the form character*(LEN(x)+10).
In a more general case, deferred length:
character(len=:), allocatable :: concat ! Deferred length, will be defined on allocation
where later
concat = x//'plus stuff' ! Using automatic allocation on intrinsic assignment
Using these forms adds the requirement that the function concat has an explicit interface in the main program. You'll find much about that in other questions and resources. Providing an explicit interface will also remove the problem that, in the main program, concat also implicitly has a real result.
To stress:
program
   implicit none
   character(len=[something]) concat
   print *, concat('hell')
end program
will not work for concat having result of the "length unknown at compile time" forms. Ideally the function will be an internal one, or one accessed from a module.
1 There is a third: assumed length function result. Anyone who wants to know about this could read this separate question. Everyone else should pretend this doesn't exist. Just like the writers of the Fortran standard.
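Putting the answer's pieces together, a working version might look like this (a sketch: making concat an internal function gives it an explicit interface automatically, and character(len=4) replaces the obsolescent character*4):

```fortran
program scratch
   implicit none
   character(len=4) :: word
   word = 'hell'
   print *, concat(word)
contains
   function concat(x)
      character(len=*), intent(in) :: x
      character(len=len(x)+10) :: concat   ! automatic result length
      concat = x // 'plus stuff'
   end function concat
end program scratch
```

Because concat is internal, the main program sees its character result type and length; no implicit typing is involved.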

Arduino and TinyGPS++ convert lat and long to a string

I'm having a problem parsing the lat and long coordinates from TinyGPS++ into a double or a String. The code that I'm using is:
String latt = ((gps.location.lat(),6));
String lngg = ((gps.location.lng(),6));
Serial.println(latt);
Serial.println(lngg);
The output that I'm getting is:
0.06
Does somebody know what I'm doing wrong? Does it have something to do with rounding (something like a Math.Round function in Arduino)?
Thanks!
There are two problems:
1. This does not compile:
String latt = ((gps.location.lat(),6));
The error I get is
Wouter.ino:4: warning: left-hand operand of comma has no effect
Wouter:4: error: invalid conversion from 'int' to 'const char*'
Wouter:4: error: initializing argument 1 of 'String::String(const char*)'
There is nothing in the definition of the String class that would allow this statement. I was unable to reproduce printing values of 0.06 (in your question) or 0.006 (in a later comment). Please edit your post to have the exact code that compiles, runs and prints those values.
2. You are unintentionally using the comma operator.
There are two places a comma can be used: to separate arguments to a function call, and to separate multiple expressions which evaluate to the last expression.
You're not calling a function here, so it is the latter use. What does that mean? Here's an example:
int x = (1+y, 2*y, 3+(int)sin(y), 4);
The variable x will be assigned the value of the last expression, 4. There are very few reasons that anyone would actually use the comma operator in this way. It is much more understandable to write:
int x;
1+y; // Just a calculation, result never used
2*y; // Just a calculation, result never used
3 + (int) sin(y); // Just a calculation, result never used
x = 4; // A (trivial) calculation, result stored in 'x'
The compiler will usually optimize out the first 3 statements and only generate code for the last one1. I usually see the comma operator in #define macros that are trying to avoid multiple statements.
For your code, the compiler sees this
((gps.location.lat(),6))
And evaluates it as a call to gps.location.lat(), which returns a double value. The compiler throws this value away, and even warns you that it "has no effect."
Next, it sees a 6, which is the actual value of this expression. The parentheses get popped, leaving the 6 value to be assigned to the left-hand side of the statement, String latt =.
If you look at the declaration of String, it does not define how to take an int like 6 and either construct a new String, or assign it 6. The compiler sees that String can be constructed from const char *, so it tells you that it can't convert a numeric 6 to a const char *.
Unlike a compiler, I think I can understand what you intended:
double latt = gps.location.lat();
double lngg = gps.location.lng();
Serial.println( latt, 6 );
Serial.println( lngg, 6 );
The 6 is intended as an argument to Serial.println. And those arguments are correctly separated by a comma.
As a further bonus, it does not use the String class, which will undoubtedly cause headaches later. Really, don't use String. Instead, hold on to numeric values, like ints and floats, and convert them to text at the last possible moment (e.g., with println).
I have often wished for a compiler that would do what I mean, not what I say. :D
1 Depending on y's type, evaluating the expression 2*y may have side effects that cannot be optimized away. The streaming operator << is a good example of a mathematical operator (left shift) with side effects that cannot be optimized away.
And in your code, calling gps.location.lat() may have modified something internal to the gps or location classes, so the compiler may not have optimized the function call away.
In all cases, the result of the call is not assigned because only the last expression value (the 6) is used for assignment.

Maximum size/offset of streamed File using Ada.Streams.Stream_IO.Read

I am trying to read specific blocks of data (around 4096 bytes) from a (possibly) huge file.
Using Ada.Streams.Stream_IO.Read() with the GNAT compiler, what would be the maximum offset that I could use? That is, if I wanted to read the last 4 kilobytes of the file, using
type Block_Array is array (1..4096) of Positive;
...
Ada.Streams.Stream_IO.Read(File, Block_Array, Last, Offset);
how big could the Offset be (and therefore the file)?
Doing a bit of research, Offset seems in GNAT to be defined as mod 2 ** Standard'Address_Size [1], which would be 2^32 on a 32-bit machine. It is not absolutely clear to me if this refers to bits, bytes, kilobytes or even some obscure multiple.
Supposing that it means bytes, wouldn't that mean the biggest file I could handle would be 4 gigabytes (2^32 bytes) large? If so, is there a way to make this larger?
Since it was suggested that I haven't checked the (language) reference manual, here is the research that lead me to the question in the first place:
In [2] the read procedure is defined as:
procedure Read (File : in File_Type;
                Item : out Stream_Element_Array;
                Last : out Stream_Element_Offset;
                From : in Positive_Count);
A little further up:
type Count is range 0 .. *implementation-defined*;
subtype Positive_Count is Count range 1 .. Count'Last;
As one can see, the actual range of Count is implementation-defined. Since I am using the GNAT compiler (see above), I checked [1]. This states that
The Standard I/O packages described in Annex A for
[...] Ada.Stream_IO [...]
are implemented using the C library streams facility; where
[...] All input/output operations use fread/fwrite.
In the same documentation below
function fread
  (buffer : voids;
   size   : size_t;
   count  : size_t;
   stream : FILEs)
where
type size_t is mod 2 ** Standard'Address_Size;
Again, Standard'Address_Size would be 32 on a 32-bit machine (I've also checked prior to asking that this is the case on my computer). I am also still not sure after reading both the language reference manual AND the implementation documentation of GNAT if Stream_Element_Offset refers to bytes or something other.
But again, supposing that it means bytes, wouldn't that mean the biggest file I could handle would be 4 gigabytes (2^32 bytes) large? If so, is there a way to make this larger?
[1]: The Implementation of Standard I/O - GNAT Reference Manual
[2]: Ada Reference Manual - A.12.1 The Package Streams.Stream_IO
On Mac OS X, with FSF GCC 5.1.0, there is
procedure Read
  (File : File_Type;
   Item : out Stream_Element_Array;
   Last : out Stream_Element_Offset;
   From : Positive_Count);
where
type Count is new Stream_Element_Offset
  range 0 .. Stream_Element_Offset'Last;
subtype Positive_Count is Count range 1 .. Count'Last;
-- Index into file, in stream elements
-- Index into file, in stream elements
and (in Ada.Streams)
type Stream_Element_Offset is new Long_Long_Integer;
which is 64 bits .. should be enough.
However, as Alex points out, GNAT GPL 2014 has
type Stream_Element_Offset is range
  -(2 ** (Standard'Address_Size - 1)) ..
  +(2 ** (Standard'Address_Size - 1)) - 1;
which means that, on a 32-bit machine, you’re limited to 2 gigabyte files.
The latest FSF GCC sources (as for 5.1.0 above) have been changed; we’ll have to wait until GNAT GPL 2015 to see which is definitive.
As a further cause for concern, the GNAT GPL 2014 code for Ada.Streams.Stream_IO.Set_Position (an internal subprogram) is
procedure Set_Position (File : File_Type) is
   use type System.CRTL.long;
   use type System.CRTL.ssize_t;
   R : int;
begin
   if Standard'Address_Size = 64 then
      R := fseek64 (File.Stream,
                    System.CRTL.ssize_t (File.Index) - 1, SEEK_SET);
   else
      R := fseek (File.Stream,
                  System.CRTL.long (File.Index) - 1, SEEK_SET);
   end if;
   if R /= 0 then
      raise Use_Error;
   end if;
end Set_Position;
whereas the GCC 5.1.0 version (which has no alternative implementations) is
procedure Set_Position (File : File_Type) is
   use type System.CRTL.int64;
   R : int;
begin
   R := fseek64 (File.Stream, System.CRTL.int64 (File.Index) - 1, SEEK_SET);
   if R /= 0 then
      raise Use_Error;
   end if;
end Set_Position;
If your system has fseek64() - or possibly fseeko(), which takes an off_t rather than a long for the offset parameter - and friends (I think it must, looking at the code above) I would think it wouldn’t be too hard to write your own version of Ada.Streams.Stream_IO to always use the 64-bit functions. Probably easiest to call it My_Stream_IO and grit your teeth about compiler warnings on use of internal GNAT units rather than try to slot it into the Ada hierarchy.
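For reference, the C-level seek the runtime ultimately relies on can be exercised directly. This is only a sketch assuming a POSIX C library, where defining _FILE_OFFSET_BITS as 64 makes off_t (and hence fseeko) 64-bit even on 32-bit systems; read_last_block is a hypothetical helper, not part of GNAT:

```cpp
// Must come before any standard header is included.
#define _FILE_OFFSET_BITS 64
#include <cstdio>
#include <sys/types.h>

// Read the last n bytes of a possibly huge file into buf.
bool read_last_block(const char* path, unsigned char* buf, std::size_t n) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    // Seek n bytes back from the end; works past 2 GiB because off_t is 64-bit.
    if (fseeko(f, -static_cast<off_t>(n), SEEK_END) != 0) {
        std::fclose(f);
        return false;
    }
    const std::size_t got = std::fread(buf, 1, n, f);
    std::fclose(f);
    return got == n;
}
```

A hand-rolled My_Stream_IO could wrap exactly this kind of call instead of the 32-bit fseek.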
If you want to know the range of a scalar type, you can access the minimum and maximum values using <type>'First and <type>'Last:
with Ada.Text_IO;

procedure Check_Range is
   use Ada.Text_IO;
   subtype Type_To_Check is Integer;
begin
   Put (Type_To_Check'Image (Type_To_Check'First));
   Put (" .. ");
   Put (Type_To_Check'Image (Type_To_Check'Last));
   New_Line;
end Check_Range;
If you want to access larger files than what the standard library supports, you can import the operating system functions giving access to the largest files allowed by your operating system.
The details of importing functions from other languages is specified in annex B in the language reference manual.

C++11 numeric_limits<>::max() at compile time

When I was trying to get something to compile on MSVC, I found out (sigh) that the compiler shipped with Visual Studio Express 2013 doesn't support constexpr functions yet. Therefore, I couldn't pass the result of std::numeric_limits<size_t>::max() to a template. I can hack around this by just casting -1 to size_t, but I don't think this would strictly be portable, as (correct me if I'm wrong) the two's complement representation of negative numbers isn't standardized (yet).
What is the recommended approach on doing this?
The boost integer library is ported to many platforms and has a max constant: boost::integer_traits<size_t>::const_max.
#include <type_traits>

template <class T, class Enable = std::enable_if_t<std::is_unsigned<T>::value>>
struct NumericLimits {
    static const T sk_min_ = 0;
    static const T sk_max_ = static_cast<T>(-1);
};
usage as a template parameter:
template <class T, T N>
class X {};

template <class T>
class Y {
    // to instantiate X here you need a templated way
    // to get the max value of `T`, because you don't know
    // what `T` actually is, so you can't use something like INT_MAX
    X<T, NumericLimits<T>::sk_max_> x_;
};

auto y = Y<unsigned long>{};
An unsigned variable is guaranteed by the standard to wrap around. So if you want to use a general way to get the max value of an unsigned type (including as a template argument) the above code will work.
For signed types I don't know of a conforming way other than specializing for each type; there are only a few (char, signed char, short, int, long, long long), so that is doable too. And I mean use INT_MAX and friends, not hard-coded values, as you don't know the range of the implementation.
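As a sketch of that specialization route (type and member names here are hypothetical, mirroring the NumericLimits style above), using the <climits> macros rather than hard-coded values:

```cpp
#include <climits>

// Primary template intentionally left undefined:
// using an unspecialized type is a compile error.
template <class T> struct SignedLimits;

template <> struct SignedLimits<int> {
    static const int sk_min_ = INT_MIN;
    static const int sk_max_ = INT_MAX;
};

template <> struct SignedLimits<long long> {
    static const long long sk_min_ = LLONG_MIN;
    static const long long sk_max_ = LLONG_MAX;
};

// ...and likewise for char, signed char, short, and long.
```

The members are integral constants, so they are usable as non-type template arguments even without constexpr support.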
Please note that I've tested this with g++ in C++14 mode, so it might need small tweaks to work on Visual Studio.
