Why do I get an unknown identifier error for the fail_test1 template but not for pass_test1?
template pass_test1 {
param len = 10;
#if (true) {
saved int8 data[len];
}
}
group pass is pass_test1;
template fail_test1 {
param len = 10;
saved int8 data[len];
}
group fail is fail_test1;
In short: in pass_test1 the saved declaration is not part of the template type, whereas in fail_test1 it is. The error message comes from DMLC trying to parse the type of data as a member of the template type.
This happens because saved declarations are normally added to the template type. If you silence the error by s/len/10/, then you can create a run-time reference to it, e.g. local fail_test1 x = cast(fail, fail_test1), and then access x.data. However, this does not work for pass_test1: you can write local pass_test1 y = cast(pass, pass_test1), but you cannot access y.data. Conditional parts of a template cannot be part of the template's type, because there is no reasonable way for a run-time reference to handle the case where the #if condition of the referenced object is false.
A funny consequence is that your #if (true) { saved int8 data[len]; } construct is in fact the recommended idiom for parameterized array members of templates. It looks a bit odd, but it happens to do exactly what you want, and it's uncommon enough that we probably won't invent special syntax for it.
If you want to access the data member from the template type, then you can get the best of both worlds by writing:
template pass {
param len default 10;
#if (true) { saved int8 _data[len]; }
param data : int8*;
param data = &_data;
}
Here, data is available within the template type, and the array length is still configurable.
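For example, a run-time reference can then reach the array through the parameter (a sketch; g is a hypothetical object declared elsewhere with g is pass):
local pass p = cast(g, pass);
local int8 *elems = p.data; // data is part of the template type, so this compiles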
I'm trying to restrict a user-defined string literal to a given length:
Carte operator"" _C (const char* str, std::size_t sz) {
if (sz != 2)
throw runtime_error("Wrong size of string");
return from_string(str);
}
This works perfectly, except that since the literal is known at compile time, the size test could be done then as well. However, I can't use a static_assert here:
jeu.cpp:105:17: error: static_assert expression is not an integral constant expression
static_assert(sz == 2, "Wrong size of string");
^~~~~~~
jeu.cpp:105:17: note: read of non-const variable 'sz' is not allowed in a constant expression
jeu.cpp:104:51: note: declared here
Is there a way to check the size of a user-defined string literal at compile time in C++11?
If not, is it possible with a more recent C++ standard?
Use sizeof(test) to get the length; then you can use static_assert:
const char test[] = "blablalba";
static_assert(sizeof(test) == 10, "unexpected length"); // sizeof includes the terminating '\0'; a message is required in C++11
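With a more recent standard, yes: C++20 allows class-type non-type template parameters, which enables a string literal operator template whose length is checked at compile time. A sketch, reusing Carte and from_string from the question:
#include <cstddef>

template <std::size_t N>
struct FixedString {
    char data[N]{};
    constexpr FixedString(const char (&s)[N]) {
        for (std::size_t i = 0; i < N; ++i) data[i] = s[i];
    }
};

template <FixedString S>
Carte operator""_C() {
    // N includes the terminating '\0', so a 2-character literal has N == 3
    static_assert(sizeof(S.data) == 3, "Wrong size of string");
    return from_string(S.data);
}

// "ab"_C compiles; "abc"_C trips the static_assert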
hi - I'm using the XC16 C compiler with a Microchip dsPIC33F CPU. I'm trying to use an enum to store the state of a device. I have accessors to get and set the device state. The state is (or should be) captured in a variable called "currentNodeState" which is declared with the following:
typedef enum NodeState currentNodeState;
I then use it in the set accessor with:
void SetNodeState(NodeState state)
{
currentNodeState = state;
}
which results in the following compiler error:
Node_IO.c:168:22: error: expected identifier or '(' before '=' token
Any suggestions as to why I get this error message?
Jim
You need to define what values are legal for the enum.
cplusplus.com has a good write-up on enums (near the bottom of the page): http://www.cplusplus.com/doc/tutorial/other_data_types/
Here's a modified example from my code:
typedef enum
{
ADJUST_NONE,
ADJUST_FINE,
ADJUST_COARSE,
ADJUST_INVALID
} adjustment_state_t;
adjustment_state_t ADJUSTER_StateUpdate(adjustment_state_t currentState,
uint8_t thresholdViolation);
Valid inputs to the function are any of the values in the enum. ADJUST_NONE = 0, and each subsequent value is 1 higher than the last. That makes ADJUST_INVALID = 3.
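For what it's worth, the "expected identifier" error itself comes from the typedef: typedef enum NodeState currentNodeState; declares currentNodeState as a type name, not a variable, so it cannot appear on the left of an assignment. A sketch of a corrected declaration (the state values here are hypothetical):
enum NodeState { NODE_IDLE, NODE_ACTIVE, NODE_FAULT }; /* hypothetical values */

static enum NodeState currentNodeState; /* a variable this time, not a typedef */

void SetNodeState(enum NodeState state)
{
    currentNodeState = state;
}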
OK, so here's the culprit method:
class FunctionDecl
{
// More code...
override void execute()
{
//...
writeln("Before setting... " ~ name);
Glob.functions.set(name,this);
writeln("After setting." ~ name);
//...
}
}
And here's what happens:
If I omit the writeln("After setting." ~ name); line, the program crashes, right at this point.
If I keep it in (using the name attribute is the key, not the writeln itself), it works just fine.
So, I suppose this is being garbage collected automatically? Why is that? (A pointer to some readable reference about GC in D would be awesome.)
How can I solve that?
UPDATE:
Just tried a GC.disable() at the very beginning of my code. And... automagically, everything works again! So that was the culprit, as I had suspected. The thing is: how is this solvable without totally eliminating garbage collection?
UPDATE II:
Here's the full code of functionDecl.d - "unnecessary" code omitted:
//================================================
// Imports
//================================================
// ...
//================================================
// C Interface for Bison
//================================================
extern (C)
{
void* FunctionDecl_new(char* n, Expressions i, Statements s) { return cast(void*)(new FunctionDecl(to!string(n),i,s)); }
void* FunctionDecl_newFromReference(char* n, Expressions i, Expression r) { return cast(void*)(new FunctionDecl(to!string(n),i,r)); }
}
//================================================
// Functions
//================================================
class FunctionDecl : Statement
{
// .. class variables ..
this(string n, Expressions i, Statements s)
{
this(n, new Identifiers(i), s);
}
this(string n, Expressions i, Expression r)
{
this(n, new Identifiers(i), r);
}
this(string n, Identifiers i, Statements s)
{
// .. implementation ..
}
this(string n, Identifiers i, Expression r)
{
// .. implementation ..
}
// .. other unrelated methods ..
override void execute()
{
if (Glob.currentModule !is null) parentModule = Glob.currentModule.name;
Glob.functions.set(name,this);
}
}
Now as for what Glob.functions.set(name,this); does:
Glob is an instance holding global definitions
functions is the class instance dealing with defined functions (it comes with a FunctionDecl[] list)
set simply does this: list ~= func;
P.S. I'm 99% sure it has something to do with this one: Super-weird issue triggering "Segmentation Fault", though I'm still not sure what went wrong this time...
I think the problem is that the C function is allocating the object, but D doesn't keep a reference. If FunctionDecl_new is called back-to-back in a tight memory environment, here's what would happen:
The first call creates a new object. That pointer goes into the land of C, where the D GC can't see it.
The second call allocates another new object. Since memory is tight (as far as the GC pool is concerned), the GC tries to run a collection cycle. It finds the object from the first call, but cannot find any live pointers to it, so it frees it.
The C code then uses that freed object, causing the segfault.
The segfault won't always happen: if there's memory to spare, the GC won't free the object when you allocate the second one; it will just use its free memory instead of collecting. That's why omitting the writeln can get rid of the crash: the ~ operator allocates, which might be just enough to push you over that memory line and trigger a collection (and, of course, running the ~ gives the GC a chance to run in the first place; if you never allocate from the GC, you never collect either - the allocator looks roughly like gc_allocate() { if(memory_low) gc_collect(); return GC_malloc(...); }).
There are three solutions:
Immediately store a reference in the FunctionDecl_new function in a D structure, before returning:
FunctionDecl[] fdReferences;
void* FunctionDecl_new(...) {
auto n = new FunctionDecl(...);
fdReferences ~= n; // keep the reference for later so the GC can see it
return cast(void*) n;
}
Call GC.addRoot on the pointer right before you return it to C. (I don't like this solution; I think the array is better and a lot simpler.)
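A minimal sketch of that option (FunctionDecl_release is a hypothetical extra entry point that the C side would need to call once it's done with the object):
import core.memory : GC;

extern (C) void* FunctionDecl_new(char* n, Expressions i, Statements s) {
    auto fd = new FunctionDecl(to!string(n), i, s);
    GC.addRoot(cast(void*) fd);   // the GC now treats the object as live
    return cast(void*) fd;
}

extern (C) void FunctionDecl_release(void* p) {
    GC.removeRoot(p);             // the object becomes collectable again
}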
Use malloc to create the object to give to C:
void* FunctionDecl_new(...) {
import std.conv : emplace;
import core.stdc.stdlib : malloc;
enum size = __traits(classInstanceSize, FunctionDecl);
auto memory = malloc(size)[0 .. size]; // need to slice so we know the size
auto obj = emplace!FunctionDecl(memory, /* args to ctor */); // create the object in the malloc'd block ("ref" is a keyword, so it can't be a variable name)
return memory.ptr; // give the pointer to C
}
Then, of course, you ought to free the pointer when you know it is no longer going to be used; if you don't, it's just a memory leak rather than memory corruption.
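For the malloc version, a hypothetical matching cleanup entry point (FunctionDecl_delete is not in the original code) could look like:
extern (C) void FunctionDecl_delete(void* p) {
    import core.stdc.stdlib : free;
    auto obj = cast(FunctionDecl) p;
    destroy(obj);   // run the destructor, if any
    free(p);        // release the malloc'd block
}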
The general rule I follow, by the way, is that any memory that crosses a language barrier for storage (temporary use is different) ought to be allocated the way the receiving language expects: so if you pass data to C or C++ to keep, allocate it in a C fashion, e.g. with malloc. That causes the least surprising friction when it gets stored.
If the object is just being used temporarily, it is fine to pass a plain pointer to it, since a temporary use isn't stored or freed by the receiving function, so there's less danger there. Your reference will still exist too, if nothing else, on the call stack.
I'm using Visual Studio 2012.
I made a class for a map like this:
class classA
{
private:
string a;
string b;
uint32_t start;
uint32_t end;
};
and I defined a map and a function like this:
typedef std::map<uint32_t, classA, std::greater<uint32_t> > ClassAMap;
typedef ClassAMap::iterator ClassAMapIterator;
ClassAMap classAMap;
void lowerBoundTest(ClassAMap &classAMap)
{
uint32_t test = 0;
ClassAMapIterator it;
cin >> hex >> test;
it = classAMap.lower_bound(test);
cout << "lower_bound of " << test << ": "
<< it->first << endl;
}
Then I got this assertion failure when dereferencing the map's iterator:
Debug Assertion Failed!
Program: C:\Windows\system32\MSVCP110D.dll
File: C:\program files (x86)\microsoft visual studio 11.0\vc\include\xtree
Line: 137
Expression: map/set iterator not dereferencable
For information on how your program can cause an assertion failure,
see the Visual C++ documentation on asserts.
(Press Retry to debug the application)
I don't get any error message when I define a map like this:
typedef std::map<uint32_t, classA> ClassAMap;
typedef ClassAMap::iterator ClassAMapIterator;
But I need the greater<> comparator there on purpose.
How can I get rid of this assertion failure?
lower_bound returns the map's end() iterator when the item you're searching for isn't in the map and its insertion point would be at the end of the map. Since end() is one past the last element, it doesn't point to a valid element of the map, and dereferencing it is an error. Be glad you were running this in Debug mode with checked iterators; otherwise you'd have gotten undefined behavior instead of a nice error message.
The reason the behavior changes when you declare the map without greater is that the ordering of the map changes: lower_bound now returns begin() instead of end(), which is a valid element.
To see whether the element was found in the map, you should use equal_range instead of lower_bound and check that the two iterators are not equal. The first of the two is identical to what lower_bound returns.
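For example, a sketch of the function from the question with that check added (the guard against end() is the key change):
void lowerBoundTest(ClassAMap &classAMap)
{
    uint32_t test = 0;
    cin >> hex >> test;

    std::pair<ClassAMapIterator, ClassAMapIterator> range = classAMap.equal_range(test);
    if (range.first != range.second)           // the key itself is in the map
        cout << "found " << test << ": " << range.first->first << endl;
    else if (range.first != classAMap.end())   // key absent, but lower_bound is valid
        cout << "lower_bound of " << test << ": " << range.first->first << endl;
    else                                       // lower_bound == end(): nothing to dereference
        cout << "no lower_bound for " << test << endl;
}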