I am wondering how to efficiently pass a reference to a const pointer to an object of a class.
For example,
class BigData
{
public:
    int m[1000];
};

void Func(const BigData* const& bigData)
{
    // just read bigData; No modification on bigData.
}

int main()
{
    BigData* bigData = new BigData();
    Func(bigData);
}
In the above example, I do not quite understand why I have to put const before the reference (&).
If I try to build it with that const removed, the compiler complains:
cannot convert parameter 1 from 'BigData *' to 'const BigData *&'
It seems to be related to the rvalue rules, but I don't know which rule exactly governs this case.
TIA
Don't. Just pass the pointer by value: it is syntactically easier for the called function to use, and it has less overhead. (As for the error itself: converting BigData* to const BigData* yields a temporary pointer, and a temporary can bind only to a reference to const, which is why the extra const before the & is required.)
void Func(const BigData *bigData)
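For illustration, here is the question's example reworked with the by-value signature (a minimal sketch; the delete is added here only to balance the new):

class BigData
{
public:
    int m[1000];
};

// Only the pointer itself (a few bytes) is copied; the BigData object is not.
void Func(const BigData* bigData)
{
    int first = bigData->m[0]; // read-only access is still allowed
    (void)first;               // silence the unused-variable warning
}

int main()
{
    BigData* bigData = new BigData();
    Func(bigData); // BigData* converts to const BigData* implicitly
    delete bigData;
}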
I am analyzing assemblies with Mono.Cecil. For example, consider the following C# code:
interface IBase { void f(int i); }
interface IDerived : IBase { /* inherits f from IBase */ }
...
void SomeFunction()
{
    IDerived o = ...;
    o.f(5);
}
I know how to get a MethodDefinition object corresponding to SomeFunction.
I can then loop through MethodDefinition.Body.Instructions:
var methodDef = GetMethodDefinitionOfSomeFunction();
foreach (var instruction in methodDef.Body.Instructions)
{
    switch (instruction.Operand)
    {
        case MethodReference mr:
            ...
            break;
    }
    yield return memberRef;
}
And this way I can find out that the method SomeFunction calls the function IBase.f
Now I would like to know the declared type of the object on which the function f is called, i.e. the declared type of o.
Inspecting mr.DeclaringType does not help, because it returns IBase.
This is what I have so far:
TypeReference typeRef = null;
if (instruction.OpCode == OpCodes.Callvirt)
{
    // Identify the type of the object on which the call is being made.
    var objInstruction = instruction;
    if (instruction.Previous.OpCode == OpCodes.Tail)
    {
        objInstruction = instruction.Previous;
    }
    for (int i = mr.Parameters.Count; i >= 0; --i)
    {
        objInstruction = objInstruction.Previous;
    }
    if (objInstruction.OpCode == OpCodes.Ldloc_0 ||
        objInstruction.OpCode == OpCodes.Ldloc_1 ||
        objInstruction.OpCode == OpCodes.Ldloc_2 ||
        objInstruction.OpCode == OpCodes.Ldloc_3)
    {
        var localIndex = objInstruction.OpCode.Op2 - OpCodes.Ldloc_0.Op2;
        typeRef = locals[localIndex].VariableType;
    }
    else
    {
        switch (objInstruction.Operand)
        {
            case FieldDefinition fd:
                typeRef = fd.FieldType; // a field load pushes a value of the field's type
                break;
            case VariableDefinition vd:
                typeRef = vd.VariableType;
                break;
        }
    }
}
where locals is methodDef.Body.Variables
But this is, of course, not enough, because the arguments to a function can themselves be calls to other functions, as in f(g("hello")). It looks like the inspection of previous instructions must replay the actions of the virtual machine when it actually executes the code. I do not execute the code, of course, but I need to recognize function calls and replace them and their arguments with their respective return values (even if placeholders). It looks like a major pain.
Is there a simpler way? Maybe there is something built-in already?
I am not aware of an easy way to achieve this.
The "easiest" way I can think of is to walk the stack and find where the reference used as the target of the call is pushed.
Basically, starting from the call instruction, go back one instruction at a time, taking into account how each one affects the stack; this way you can find the exact instruction that pushes the reference used as the target of the call. (A long time ago I wrote something like that; you can use the code at https://github.com/lytico/db4o/blob/master/db4o.net/Db4oTool/Db4oTool/Core/StackAnalyzer.cs as inspiration.)
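To make the idea concrete, here is a rough sketch of that backward walk (my own simplification, not the linked StackAnalyzer): it handles straight-line code only and ignores branches, exception handlers, calli, newobj and several other StackBehaviour cases, so treat it as a starting point rather than a complete implementation.

using Mono.Cecil;
using Mono.Cecil.Cil;

static class StackWalker
{
    // Walks backwards from a call/callvirt and returns the instruction
    // that pushed the call target (the receiver).
    public static Instruction FindReceiverPush(Instruction call)
    {
        var callee = (MethodReference)call.Operand;
        // At the call site the receiver sits below the arguments, i.e. at
        // slot Parameters.Count counting down from the top of the stack.
        int slot = callee.Parameters.Count;
        for (var cur = call.Previous; cur != null; cur = cur.Previous)
        {
            int pushes = PushCount(cur);
            if (slot < pushes)
                return cur; // cur produced the value we are tracking
            // Re-index the slot relative to the stack before cur executed.
            slot = slot - pushes + PopCount(cur);
        }
        return null; // e.g. the receiver is a method argument
    }

    static int PushCount(Instruction i)
    {
        switch (i.OpCode.StackBehaviourPush)
        {
            case StackBehaviour.Push0: return 0;
            case StackBehaviour.Push1_push1: return 2; // dup
            case StackBehaviour.Varpush: // calls push their return value, if any
                return ((MethodReference)i.Operand).ReturnType.MetadataType
                       == MetadataType.Void ? 0 : 1;
            default: return 1; // Push1, Pushi, Pushi8, Pushr4, Pushr8, Pushref
        }
    }

    static int PopCount(Instruction i)
    {
        switch (i.OpCode.StackBehaviourPop)
        {
            case StackBehaviour.Pop0: return 0;
            case StackBehaviour.Pop1:
            case StackBehaviour.Popi:
            case StackBehaviour.Popref: return 1;
            case StackBehaviour.Varpop: // calls consume arguments (+ receiver)
                // note: newobj does not pop a receiver; a full version must special-case it
                var m = (MethodReference)i.Operand;
                return m.Parameters.Count + (m.HasThis ? 1 : 0);
            default: return 2; // crude: the three-value behaviours need real handling
        }
    }
}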
You'll also need to consider scenarios in which the pushed reference is produced through a method or property, for example SomeFunction().f(5). In this case you may need to evaluate that method to find out the actual type returned.
Keep in mind that you'll need to handle a lot of different cases; for example, imagine the code below:
class Utils
{
    public static T Instantiate<T>() where T : new() => new T();
}

class SomeType
{
    public void F(int i) {}
}

class Usage
{
    static void Main()
    {
        var o = Utils.Instantiate<SomeType>();
        o.F(1);
    }
}
While walking the stack you'll find that o is the target of the method call; then you'll evaluate the Instantiate<T>() method and find that it returns new T(). Knowing that T is SomeType in this case, that is the type you're looking for.
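For that last step, a small sketch (the names here are mine) of mapping a generic return type such as T back to the type argument supplied at the call site; Cecil does not substitute generic arguments automatically, so the lookup is manual:

using Mono.Cecil;

static class ReturnTypeResolver
{
    // For Utils.Instantiate<SomeType>() the callee's declared return type is
    // the generic parameter T; look it up in the instance's type arguments.
    public static TypeReference ResolveReturnType(MethodReference callee)
    {
        var generic = callee as GenericInstanceMethod;
        if (generic != null
            && callee.ReturnType is GenericParameter gp
            && gp.Type == GenericParameterType.Method)
            return generic.GenericArguments[gp.Position];
        return callee.ReturnType;
    }
}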
Vagaus's answer helped me come up with a working implementation.
I published it on GitHub: https://github.com/MarkKharitonov/MonoCecilExtensions
It includes many unit tests, but I am sure I missed some cases.
I was wondering how I could change the code below so that bmBc is computed at compile time. The version below works at runtime, but that is not ideal, since I need the bmBc table to be known at compile time. I would appreciate advice on how to improve this.
import std.conv : to;
import std.stdio;

int[string] bmBc;
immutable string pattern = "GCAGAGAG";
const int size = to!int(pattern.length);

struct king {
    void calculatebmBc(int i)()
    {
        static if (i < size - 1)
            bmBc[to!string(pattern[i])] = to!int(size - i - 1);
        // bmBc[pattern[i]] ~= i-1;
        calculatebmBc!(i + 1)();
    }

    void calculatebmBc(int i : size - 1)() {
    }
}

void main() {
    king myKing;
    const int start = 0;
    myKing.calculatebmBc!(start)();
    // 1. enum bmBcTable = bmBc;
}
The variables bmBc and bmh can't be read at compile time because you define them as regular runtime variables.
You need to define them as enums, or possibly immutable, to read them at compile time, but that also means that you cannot modify them after initialization. You need to refactor your code to return values instead of using out parameters.
Alternatively, you can initialize them at runtime inside of a module constructor.
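A minimal sketch of that refactoring (makeBmBc and bmBcTable are my names, not from the original code): build the table in a plain function that returns it, then force compile-time evaluation with an enum.

import std.conv : to;
import std.stdio;

immutable string pattern = "GCAGAGAG";

// A plain function like this is CTFE-able because it only builds and
// returns a value instead of mutating a global.
int[string] makeBmBc(string pat)
{
    int[string] table;
    foreach (i; 0 .. pat.length - 1)
        table[to!string(pat[i])] = to!int(pat.length - i - 1);
    return table;
}

// enum forces evaluation at compile time (CTFE); bmBcTable is a manifest
// constant from here on and can no longer be modified.
enum bmBcTable = makeBmBc(pattern);

void main()
{
    writeln(bmBcTable);
}

For the module-constructor alternative, declare the table as an immutable module-level variable and assign it once inside shared static this().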
What is the difference between these two function signatures?
function f(?i:Int = 0) {}
function f(i:Int = 0) {}
It doesn't seem to make any difference whether the argument is prefixed with ?; both versions compile fine.
There is indeed no reason to use ? in this example, but there is a difference.
On a statically typed target, f(null) would not compile since the basic types Int, Float and Bool are not nullable there. However, the ? implies nullability, meaning that
function f(?i:Int)
is actually the same as
function f(i:Null<Int> = null)
As you can see, the ? has two effects:
A null default value is added, so you can omit i during the call: f();
The type is wrapped in Null<T>. While this makes no difference on dynamic targets, it usually has a runtime performance cost on static targets (again: only for Int / Float / Bool arguments).
The only reason I can think of why you would want arguments with basic types to be nullable is to enable optional argument skipping. When calling f in this example, i can only be skipped if it is nullable:
class Main {
    static function main() {
        f("Test"); // 0, "Test"
    }

    static function f(?i:Int = 0, s:String) {
        trace(i, s);
    }
}
Note that if you add a default value to an optional argument, that value will be used even if you explicitly pass null:
class Main {
    static function main() {
        f(); // 0
        f(null); // 0
    }

    static function f(?i:Int = 0) {
        trace(i);
    }
}
They are two different things. A ? means an optional parameter, so it can be completely excluded from the function call, and no substitution will take place.
(x:Float = 12) is a default parameter, meaning that if it's excluded from the function call, the value 12 will be used.
I'm trying to recompile older code in the latest Visual Studio (2008), and code that compiled previously now fails. One of the problems is due to overloaded operators in my class; below is a simplified class that demonstrates the problem. If I remove the casting operators for int and char*, then it works fine. So one way to fix the issue is to replace them with to_char and to_int procedures and use those instead, but that would require a lot of changes in the code (the class is heavily used). There must be a better, smarter way to fix it. Any help is greatly appreciated :-)
class test
{
public:
    test();
    test(char* s2);
    test(int num);
    test(test &source);
    ~test();
    operator char*();
    operator int();
};

test::test() {
}

test::test(char* s2) {
}

test::test(int num) {
}

test::test(test &source) {
}

test::~test() {
}

test::operator char*() {
    return 0; // placeholder so the sample compiles
}

test::operator int() {
    return 0; // placeholder so the sample compiles
}

test test_proc() {
    test aa;
    return aa;
}

int test_proc2(test aa)
{
    return 0;
}

int main()
{
    test_proc2(test_proc());
}
//test.cpp(60) : error C2664: 'test_proc2' : cannot convert parameter 1 from 'test' to 'test'
// Cannot copy construct class 'test' due to ambiguous copy constructors or no available copy constructor
Try changing
test(test &source);
to
test(const test &source);
The issue is that the test_proc call returns a temporary test object, which can be passed to a function that accepts a const reference, but not a plain reference.
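A minimal illustration of that rule, separate from the test class:

struct S {};

S make() { return S(); } // returns a temporary

void take_ref(S&) {}        // plain (non-const) reference
void take_cref(const S&) {} // reference to const

int main()
{
    // take_ref(make());  // error: a temporary cannot bind to S&
    take_cref(make());    // OK: a temporary binds to const S&
}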
I would like to see a simple example of how to override stdext::hash_compare properly, in order to define a new hash function and comparison operator for my own user-defined type. I'm using Visual C++ (2008).
This is how you can do it:
class MyClass_Hasher {
public:
    // mean bucket size that the container should try not to exceed
    static const size_t bucket_size = 10;
    // minimum number of buckets, power of 2, > 0
    static const size_t min_buckets = (1 << 10);

    MyClass_Hasher() {
        // should be default-constructible
    }

    size_t operator()(const MyClass &key) const {
        size_t hash_value = 0;
        // do fancy stuff here with hash_value
        // to create the hash value. There's no specific
        // requirement on the value.
        return hash_value;
    }

    bool operator()(const MyClass &left, const MyClass &right) const {
        // this should implement a total ordering on MyClass, that is
        // it should return true if "left" precedes "right" in the ordering
        return false; // placeholder
    }
};
Then, you can just use:
stdext::hash_map<MyClass, MyValue, MyClass_Hasher> my_map;
There is also a complete example on MSDN.
I prefer using a non-member function.
The method explained in the Boost documentation article Extending boost::hash for a custom data type seems to work.
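For reference, a minimal sketch of that Boost approach (MyClass and its members are placeholders here): provide a free hash_value function in the type's namespace, and boost::hash picks it up via argument-dependent lookup.

#include <boost/functional/hash.hpp>
#include <cstddef>
#include <string>

namespace myns {

struct MyClass {
    int id;
    std::string name;
};

// Non-member hash function, found via ADL by boost::hash<MyClass>.
std::size_t hash_value(const MyClass &c) {
    std::size_t seed = 0;
    boost::hash_combine(seed, c.id);   // fold each member into the seed
    boost::hash_combine(seed, c.name);
    return seed;
}

} // namespace myns

// Usage: boost::hash<myns::MyClass> now works wherever a hasher is needed,
// e.g. boost::unordered_map<myns::MyClass, int, boost::hash<myns::MyClass> >.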