Is there an additional runtime cost for using named parameters? - c#-4.0

Consider the following struct:
public struct vip
{
    string email;
    string name;
    int category;

    public vip(string email, int category, string name = "")
    {
        this.email = email;
        this.name = name;
        this.category = category;
    }
}
Is there a performance difference between the following two calls?
var e = new vip(email: "foo", name: "bar", category: 32);
var e = new vip("foo", 32, "bar");
Is there a difference if there are no optional parameters defined?

I believe none. It's purely a language/compiler feature; call it syntactic sugar if you like. The generated CLR code should be the same.
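One way to convince yourself (a quick sketch using the vip struct from the question; the exact IL can vary with compiler version) is to compile both call forms and compare them in ildasm or ILSpy:

var a = new vip(email: "foo", name: "bar", category: 32);
var b = new vip("foo", 32, "bar");
// Both lines produce identical IL: the constant arguments are pushed in the
// constructor's declared order (email, category, name) and the same constructor
// is invoked; the parameter names never appear in the IL. (If reordered named
// arguments had side effects, the compiler would introduce temporaries to keep
// left-to-right evaluation, but for constants the output is exactly the same.)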

There's a compile-time cost, but not a runtime one... and the compile-time cost is very, very small.
Like extension methods or auto-implemented properties, this is just magic the compiler does; in reality it generates the same IL we're all familiar with and have been using for years.
Think about it this way: if you're passing all the parameters, the compiler calls the method with all of them; if not, it fills in the defaults behind the scenes, as if you had written an overload like this:
var e = new vip(email: "foo", category: 32); // the call you write
// generated conceptually -- this is what it's saving you from writing yourself
public vip(string email, int category) : this(email, category, "") { }

No, it is a compile-time feature only. If you inspect the generated IL you'll see no sign of the named parameters. Likewise, optional parameters are also a compile-time feature.
One thing to keep in mind regarding named parameters is that the names become part of the signature for calling a method (when they are used, obviously) at compile time. That is, if the names change, the calling code must be changed as well when you recompile. A deployed assembly, on the other hand, will not be affected until it is recompiled, as the names are not present in the IL.

There shouldn't be any. Basically, named parameters and optional parameters are syntactic sugar; the compiler writes the actual values or the default values directly into the call site.
EDIT: Note that because they are a compiler feature, this means that changes to the parameters only get updated if you recompile the "clients". So if you change the default value of an optional parameter, for example, you will need to recompile all "clients", or else they will use the old default value.
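As a small, hedged sketch (the Greeter/Client names are made up) of what the compiler bakes into the call site:

using System;

static class Greeter
{
    // Imagine this method lives in a separate, already-shipped assembly.
    public static void Greet(string name, string greeting = "Hello")
    {
        Console.WriteLine(greeting + ", " + name);
    }
}

static class Client
{
    static void Main()
    {
        Greeter.Greet("Ada");                 // compiled as Greet("Ada", "Hello")
        Greeter.Greet("Ada", greeting: "Hi"); // same IL as Greet("Ada", "Hi")
        // If Greeter later changes the default to "Howdy", this client keeps
        // passing "Hello" until it is recompiled against the new version.
    }
}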

Actually, there is a cost on the x64 CLR.
Have a look here: http://www.dotnetperls.com/named-parameters
I am able to reproduce the result: the named call takes 4.43 ns, and the normal call takes 3.48 ns
(with the program running as x64).
However, in x86, both take around 0.32 ns.
The code is attached below; compile and run it yourself to see the difference.
Note that in VS2012 the default target is AnyCPU with "Prefer 32-bit" enabled; you have to switch to x64 to see the difference.
using System;
using System.Diagnostics;

class Program
{
    const int _max = 100000000;

    static void Main()
    {
        Method1();
        Method2();
        var s1 = Stopwatch.StartNew();
        for (int i = 0; i < _max; i++)
        {
            Method1();
        }
        s1.Stop();
        var s2 = Stopwatch.StartNew();
        for (int i = 0; i < _max; i++)
        {
            Method2();
        }
        s2.Stop();
        Console.WriteLine(((double)(s1.Elapsed.TotalMilliseconds * 1000 * 1000) /
            _max).ToString("0.00 ns"));
        Console.WriteLine(((double)(s2.Elapsed.TotalMilliseconds * 1000 * 1000) /
            _max).ToString("0.00 ns"));
        Console.Read();
    }

    static void Method1()
    {
        Method3(flag: true, size: 1, name: "Perl");
    }

    static void Method2()
    {
        Method3(1, "Perl", true);
    }

    static void Method3(int size, string name, bool flag)
    {
        if (!flag && size != -1 && name != null)
        {
            throw new Exception();
        }
    }
}

Related

Enum class to string

I have a solution using switch cases, but there are so many cases that clang-tidy gives a warning for that function. I want to decrease the size of the function. Is there any way to do that?
Since an enum class can be used as a key for std::map, you can use a map to keep the enum <-> string relation, like this:
#include <map>

enum class test_enum { first, second, third };

const char* to_string(test_enum val) {
    static const std::map<test_enum, const char*> dict = {
        { test_enum::first,  "first"  },
        { test_enum::second, "second" },
        { test_enum::third,  "third"  }
    };
    auto tmp = dict.find(val);
    return (tmp != dict.end()) ? tmp->second : "<unknown>";
}
C++ has no reflection, so the map cannot be filled automatically; however, using compiler-specific extensions (e.g. the __PRETTY_FUNCTION__ extension in GCC) it can be done, as in the magic_enum library.

Is it possible in Mono.Cecil to determine the actual type of an object on which a method is called?

For example, consider the following C# code:
interface IBase { void f(int i); }
interface IDerived : IBase { /* inherits f from IBase */ }
...
void SomeFunction()
{
    IDerived o = ...;
    o.f(5);
}
I know how to get a MethodDefinition object corresponding to SomeFunction.
I can then loop through MethodDefinition.Instructions:
var methodDef = GetMethodDefinitionOfSomeFunction();
foreach (var instruction in methodDef.Body.Instructions)
{
    switch (instruction.Operand)
    {
        case MethodReference mr:
            ...
            break;
    }
    yield return memberRef;
}
And this way I can find out that the method SomeFunction calls the function IBase.f
Now I would like to know the declared type of the object on which the function f is called, i.e. the declared type of o.
Inspecting mr.DeclaringType does not help, because it returns IBase.
This is what I have so far:
TypeReference typeRef = null;
if (instruction.OpCode == OpCodes.Callvirt)
{
    // Identify the type of the object on which the call is being made.
    var objInstruction = instruction;
    if (instruction.Previous.OpCode == OpCodes.Tail)
    {
        objInstruction = instruction.Previous;
    }
    for (int i = mr.Parameters.Count; i >= 0; --i)
    {
        objInstruction = objInstruction.Previous;
    }
    if (objInstruction.OpCode == OpCodes.Ldloc_0 ||
        objInstruction.OpCode == OpCodes.Ldloc_1 ||
        objInstruction.OpCode == OpCodes.Ldloc_2 ||
        objInstruction.OpCode == OpCodes.Ldloc_3)
    {
        var localIndex = objInstruction.OpCode.Op2 - OpCodes.Ldloc_0.Op2;
        typeRef = locals[localIndex].VariableType;
    }
    else
    {
        switch (objInstruction.Operand)
        {
            case FieldDefinition fd:
                typeRef = fd.FieldType; // the declared type of the loaded field value
                break;
            case VariableDefinition vd:
                typeRef = vd.VariableType;
                break;
        }
    }
}
where locals is methodDef.Body.Variables
But this is, of course, not enough, because the arguments to a function can themselves be calls to other functions, as in f(g("hello")). It looks like the inspection of previous instructions above would have to mimic what the virtual machine does when it actually executes the code. I would not execute it, of course, but I would need to recognize nested function calls and replace them and their arguments with their respective return values (even if only placeholders). It looks like a major pain.
Is there a simpler way? Maybe there is something built-in already?
I am not aware of an easy way to achieve this.
The "easiest" way I can think of is to walk the stack and find where the reference used as the target of the call is pushed.
Basically, starting from the call instruction go back one instruction at a time taking into account how each one affects the stack; this way you can find the exact instruction that pushes the reference used as the target of the call (a long time ago I wrote something like that; you can use the code at https://github.com/lytico/db4o/blob/master/db4o.net/Db4oTool/Db4oTool/Core/StackAnalyzer.cs as inspiration).
You'll also need to consider scenarios in which the pushed reference is produced by a method/property; for example, SomeFunction().f(5). In this case you may need to evaluate that method to find out the actual type returned.
Keep in mind that you'll need to handle a lot of different cases; for example, imagine the code below:
class Utils
{
    public static T Instantiate<T>() where T : new() => new T();
}

class SomeType
{
    public void F(int i) {}
}

class Usage
{
    static void Main()
    {
        var o = Utils.Instantiate<SomeType>();
        o.F(1);
    }
}
While walking the stack you'll find that o is the target of the method call; you'll then evaluate the Instantiate<T>() method and find that it returns new T(); since T is SomeType in this case, that is the type you're looking for.
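To make the backward walk more concrete, here is a minimal sketch using Mono.Cecil's stack-behaviour metadata. The helper names (FindTargetLoad, PushCount, PopCount) are mine, and the sketch deliberately ignores branches, exception handlers and prefix instructions, so treat it as a starting point rather than a complete solution:

using System;
using Mono.Cecil;
using Mono.Cecil.Cil;

static class CallTargetWalker
{
    // Walk backwards from a call/callvirt and return the instruction that
    // pushed the call target (the value sitting below all the arguments).
    public static Instruction FindTargetLoad(Instruction call)
    {
        var callee = (MethodReference)call.Operand;
        int pending = callee.Parameters.Count + 1; // arguments + the target itself
        for (var cur = call.Previous; cur != null; cur = cur.Previous)
        {
            pending -= PushCount(cur);
            if (pending <= 0)
                return cur;            // this instruction pushed the call target
            pending += PopCount(cur);  // its own inputs were pushed even earlier
        }
        return null; // not found in straight-line code (e.g. branches involved)
    }

    static int PushCount(Instruction i)
    {
        var push = i.OpCode.StackBehaviourPush;
        switch (push)
        {
            case StackBehaviour.Push0:
                return 0;
            case StackBehaviour.Varpush: // call/callvirt/calli push the return value, if any
                return ((IMethodSignature)i.Operand).ReturnType.FullName == "System.Void" ? 0 : 1;
            default:
                // Push1, Pushi, Pushref, ... push one value; Push1_push1 pushes two.
                return push.ToString().Split('_').Length;
        }
    }

    static int PopCount(Instruction i)
    {
        var pop = i.OpCode.StackBehaviourPop;
        switch (pop)
        {
            case StackBehaviour.Pop0:
                return 0;
            case StackBehaviour.Varpop: // call-like instructions pop their arguments (+ target)
                return i.Operand is IMethodSignature sig
                    ? sig.Parameters.Count + (sig.HasThis && i.OpCode.Code != Code.Newobj ? 1 : 0)
                    : 0; // e.g. ret -- not expected while walking argument setup
            case StackBehaviour.PopAll:
                return 0; // leave/exception paths are out of scope for this sketch
            default:
                // Enum names encode the popped slots, e.g. Popref_popi_popi pops three values.
                return pop.ToString().Split('_').Length;
        }
    }
}

Once FindTargetLoad returns the pushing instruction, the switch from the question (ldloc -> VariableType, ldfld -> FieldType, and so on, plus the callee's return type when the target comes from a nested call) gives the declared type of o.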
Vagaus's answer helped me come up with a working implementation.
I published it on GitHub - https://github.com/MarkKharitonov/MonoCecilExtensions
It includes many unit tests, but I am sure I missed some cases.

How to check which constraints would be violated by a presumed solution?

In some cases the solver fails to find a solution for my model, even though I believe one exists.
So I would like to populate a presumed solution and then check which constraints it violates.
How can I do that with choco-solver?
Using choco-solver 4.10.6.
Forcing a solution
I ended up adding constraints to force variables to values of my presumed solution:
e.g.
// constraints to force given solution
vehicle2FirstStop[0].eq(model.intVar(4)).post();
vehicle2FirstStop[1].eq(model.intVar(3)).post();
nextStop[1].eq(model.intVar(0)).post();
nextStop[2].eq(model.intVar(1)).post();
...
and then
model.getSolver().showContradiction();
if (model.getSolver().solve()) { ....
This shows the first contradiction of the presumed solution, e.g.
/!\ CONTRADICTION (PropXplusYeqZ(sum_exp_49, mul_exp_51, ...
So the next step is to find out where terms such as sum_exp_49 come from.
Matching the contradiction terms with the code
Here is a simple trick which will hopefully provide enough information: we can override the post() and associates() methods of the model so that they dump the Java source filename and line number whenever a constraint is posted or a variable is created.
Model model = new Model("Vrp1RpV") {
    /**
     * Retrieve the filename and line number of the first caller outside of choco-solver from the stack trace.
     */
    String getSource() {
        String source = null;
        StackTraceElement[] stackTraceElements = Thread.currentThread().getStackTrace();
        // starts from 3: thread.getStackTrace() + this.getSource() + caller (post() or associates())
        for (int i = 3; i < stackTraceElements.length; i++) {
            // keep rewinding until we get out of choco-solver packages
            if (!stackTraceElements[i].getClassName().startsWith("org.chocosolver")) {
                source = stackTraceElements[i].getFileName() + ":" + stackTraceElements[i].getLineNumber();
                break;
            }
        }
        return source;
    }

    @Override
    public void post(Constraint... cs) throws SolverException {
        String source = getSource();
        // dump each constraint along with its source location
        for (Constraint c : cs) {
            System.err.println(source + " post: " + c);
        }
        super.post(cs);
    }

    @Override
    public void associates(Variable variable) {
        System.err.println(getSource() + " associates: " + variable.getName());
        super.associates(variable);
    }
};
This will dump things like:
Vrp1RpV2.java:182 post: ARITHM ([prop(EQ_exp_47.EQ.mul_exp_48)])
Vrp1RpV2.java:182 associates: sum_exp_49
Vrp1RpV2.java:182 post: ARITHM ([prop(mul_exp_48.EQ.sum_exp_49)])
Vrp1RpV2.java:182 associates: EQ_exp_50
Vrp1RpV2.java:182 post: BASIC_REIF ([(stop2vehicle[2] = 1) <=> EQ_exp_50])
...
From there it is possible to see where sum_exp_49 comes from.
EDIT: added associates() thanks to @cprudhom's suggestion on https://gitter.im/chocoteam/choco-solver

What is the practical difference between dynamic and T in C#

I read about the type dynamic in Visual C# 2010 (the corresponding MSDN entry).
I am confused about the practical difference between T and dynamic when developing generic functions. My tests so far haven't shown a way to use dynamic that isn't also possible with T. In addition, it seems that dynamic needs much more time at runtime to achieve the same tasks.
Some example code to make my point clear:
// Sample Object
public class SampleObj
{
    public string Test { get; set; }
}
My Test-Method (measuring speed as well):
static void Main(string[] args)
{
    var sampleObj1 = new SampleObj { Test = "Test1" };
    var sampleObj2 = new SampleObj { Test = "Test2" };

    Stopwatch c1 = Stopwatch.StartNew();
    bool res1 = CompareObjectsT<SampleObj>(sampleObj1, sampleObj2);
    c1.Stop();
    Console.WriteLine("Comparison T: {0} time: {1} ms", res1, c1.ElapsedMilliseconds);

    Stopwatch c2 = Stopwatch.StartNew();
    bool res2 = CompareObjectsDyna(sampleObj1, sampleObj2);
    c2.Stop();
    Console.WriteLine("Comparison dynamic: {0} time: {1} ms", res2, c2.ElapsedMilliseconds);

    var instance = new X<SampleObj>();
    Console.ReadLine();
}
The result is:
Comparison T: False time: 6 ms
Comparison dynamic: False time: 3969 ms
The dynamic version needs much more time compared to the T comparison.
Decompiling my test application reveals heavy use of runtime binding (DLR call sites), which may account for this huge amount of time:
private static bool CompareObjectsDyna([Dynamic] object source1, [Dynamic] object source2)
{
if (<CompareObjectsDyna>o__SiteContainer2.<>p__Site3 == null)
{
<CompareObjectsDyna>o__SiteContainer2.<>p__Site3 = CallSite<Func<CallSite, object, bool>>.Create(Binder.Convert(CSharpBinderFlags.None, typeof(bool), typeof(Program)));
}
if (<CompareObjectsDyna>o__SiteContainer2.<>p__Site4 == null)
{
<CompareObjectsDyna>o__SiteContainer2.<>p__Site4 = CallSite<Func<CallSite, Type, object, object, object>>.Create(Binder.InvokeMember(CSharpBinderFlags.None, "Equals", null, typeof(Program), new CSharpArgumentInfo[] { CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.IsStaticType | CSharpArgumentInfoFlags.UseCompileTimeType, null), CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null), CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null) }));
}
return <CompareObjectsDyna>o__SiteContainer2.<>p__Site3.Target(<CompareObjectsDyna>o__SiteContainer2.<>p__Site3, <CompareObjectsDyna>o__SiteContainer2.<>p__Site4.Target(<CompareObjectsDyna>o__SiteContainer2.<>p__Site4, typeof(object), source1, source2));
}
I considered this post but it hasn't answered my question.
Can someone tell me in what scenario dynamic is much more effective than T?
(Hopefully with a small practical sample.)
The short answer is that a generic type T must be known at compile time, whereas dynamic is resolved at runtime.
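As a small, hedged illustration (the types here are made up): with a generic parameter, the compiler must bind every member access at compile time, so without a common interface or base-class constraint the generic version does not even compile, whereas dynamic defers the member lookup to runtime:

using System;

class Duck  { public string Quack() { return "Quack"; } }
class Robot { public string Quack() { return "Beep";  } }

static class Demo
{
    // Bound at runtime: works for any object that happens to have Quack().
    static string MakeNoise(dynamic thing) { return thing.Quack(); }

    // static string MakeNoise<T>(T thing) { return thing.Quack(); }
    // does not compile: 'T' contains no definition for 'Quack', and there is
    // no interface constraint that could provide one.

    static void Main()
    {
        Console.WriteLine(MakeNoise(new Duck()));  // Quack
        Console.WriteLine(MakeNoise(new Robot())); // Beep
    }
}

That flexibility is what the call-site and binder machinery in the decompiled code pays for; whenever a compile-time type or constraint is available, T is the cheaper choice.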

How to create a method at runtime using Reflection.emit

I'm creating an object at runtime using reflection emit. I successfully created the fields, properties and get set methods.
Now I want to add a method. For the sake of simplicity let's say the method just returns a random number. How do I define the method body?
EDIT:
Yes, I've been looking at the MSDN documentation along with other references, and I'm starting to wrap my head around this stuff.
I see how the example in the answer below adds and/or multiplies, but what if my method does other things? How do I define that "stuff"?
Suppose I was generating the class below dynamically; how would I create the body of the GetDetails() method?
class TestClass
{
    public string Name { get; set; }
    public int Size { get; set; }

    public TestClass()
    {
    }

    public TestClass(string Name, int Size)
    {
        this.Name = Name;
        this.Size = Size;
    }

    public string GetDetails()
    {
        string Details = "Name = " + this.Name + ", Size = " + this.Size.ToString();
        return Details;
    }
}
You use a MethodBuilder to define methods. To define the method body, you call GetILGenerator() to get an ILGenerator, and then call the Emit methods to emit individual IL instructions. There is an example in the MSDN documentation for MethodBuilder, and you can find other examples of how to use Reflection.Emit on the Using Reflection Emit page:
public static void AddMethodDynamically(TypeBuilder myTypeBld,
                                        string mthdName,
                                        Type[] mthdParams,
                                        Type returnType,
                                        string mthdAction)
{
    MethodBuilder myMthdBld = myTypeBld.DefineMethod(
        mthdName,
        MethodAttributes.Public |
        MethodAttributes.Static,
        returnType,
        mthdParams);
    ILGenerator ILout = myMthdBld.GetILGenerator();
    int numParams = mthdParams.Length;
    for (byte x = 0; x < numParams; x++)
    {
        ILout.Emit(OpCodes.Ldarg_S, x);
    }
    if (numParams > 1)
    {
        for (int y = 0; y < (numParams - 1); y++)
        {
            switch (mthdAction)
            {
                case "A": ILout.Emit(OpCodes.Add);
                    break;
                case "M": ILout.Emit(OpCodes.Mul);
                    break;
                default: ILout.Emit(OpCodes.Add);
                    break;
            }
        }
    }
    ILout.Emit(OpCodes.Ret);
}
It sounds like you're looking for resources on writing MSIL. One important resource is the OpCodes class, which has a member for every IL instruction; the documentation describes how each instruction works. Another important resource is either Ildasm or Reflector. These will let you see the IL for compiled code, which will help you understand what IL you want to write. Running your GetDetails method through Reflector with the language set to IL yields:
.method public hidebysig instance string GetDetails() cil managed
{
.maxstack 4
.locals init (
[0] string Details,
[1] string CS$1$0000,
[2] int32 CS$0$0001)
L_0000: nop
L_0001: ldstr "Name = "
L_0006: ldarg.0
L_0007: call instance string ConsoleApplication1.TestClass::get_Name()
L_000c: ldstr ", Size = "
L_0011: ldarg.0
L_0012: call instance int32 ConsoleApplication1.TestClass::get_Size()
L_0017: stloc.2
L_0018: ldloca.s CS$0$0001
L_001a: call instance string [mscorlib]System.Int32::ToString()
L_001f: call string [mscorlib]System.String::Concat(string, string, string, string)
L_0024: stloc.0
L_0025: ldloc.0
L_0026: stloc.1
L_0027: br.s L_0029
L_0029: ldloc.1
L_002a: ret
}
To generate a method like that dynamically, you will need to call ILGenerator.Emit for each instruction:
ilGen.Emit(OpCodes.Nop);
ilGen.Emit(OpCodes.Ldstr, "Name = ");
ilGen.Emit(OpCodes.Ldarg_0);
ilGen.Emit(OpCodes.Call, nameProperty.GetGetMethod());
// etc..
You may also want to look for introductions to MSIL, such as this one: Introduction to IL Assembly Language.
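To tie it back to the GetDetails() example from the question, here is a hedged, self-contained sketch that emits a simplified version of that method. It uses a public field in place of the Name property to keep the IL short, and AssemblyBuilder.DefineDynamicAssembly, which is available on recent .NET versions (older Framework code would use AppDomain.DefineDynamicAssembly instead):

using System;
using System.Reflection;
using System.Reflection.Emit;

class DynamicTypeDemo
{
    static void Main()
    {
        // Build a throw-away in-memory assembly/module/type.
        var asmName = new AssemblyName("DynamicDemo");
        var asmBuilder = AssemblyBuilder.DefineDynamicAssembly(asmName, AssemblyBuilderAccess.Run);
        var moduleBuilder = asmBuilder.DefineDynamicModule("MainModule");
        var typeBuilder = moduleBuilder.DefineType("TestClass", TypeAttributes.Public);

        // A public field stands in for the Name property to keep the IL short.
        var nameField = typeBuilder.DefineField("_name", typeof(string), FieldAttributes.Public);

        // public string GetDetails() { return "Name = " + _name; }
        var method = typeBuilder.DefineMethod(
            "GetDetails", MethodAttributes.Public, typeof(string), Type.EmptyTypes);
        var il = method.GetILGenerator();
        il.Emit(OpCodes.Ldstr, "Name = ");
        il.Emit(OpCodes.Ldarg_0);          // this
        il.Emit(OpCodes.Ldfld, nameField); // this._name
        il.Emit(OpCodes.Call, typeof(string).GetMethod("Concat", new[] { typeof(string), typeof(string) }));
        il.Emit(OpCodes.Ret);

        // Bake the type and call the generated method via reflection.
        var type = typeBuilder.CreateType();
        var instance = Activator.CreateInstance(type);
        type.GetField("_name").SetValue(instance, "Widget");
        Console.WriteLine(type.GetMethod("GetDetails").Invoke(instance, null)); // Name = Widget
    }
}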
