Macro to generate function implementation by its definition? - nim-lang

Would it be possible to automate writing the implementation for remote function?
If written by hand it would look like code below. Having both function declaration and implementation is important.
proc rcall*[A, B, R](fn: string, a: A, b: B, _: type[R]): R =
  echo (fn, a, b)

proc multiply(a, b: int): int

proc multiply(a, b: int): int =
  rcall("multiply", a, b, int)
I would like to automate it and write it as
proc rcall*[A, B, R](fn: string, a: A, b: B, _: type[R]): R =
  echo (fn, a, b)

proc multiply(a, b: int): int

remotefn multiply
And the remotefn macro should look at the function declaration and generate its implementation as rcall("multiply", a, b, int). Is that possible?

Would it be possible to automate writing the implementation ...
Yes, it would be possible to automate the implementation of just about anything using Nim macros that take a typed argument (and then call getTypeImpl on it).
import std/macros

macro dumpImpl(arg: typed): untyped =
  echo arg.getTypeImpl().treeRepr()

proc rcall*[A, B, R](fn: string, a: A, b: B, _: type[R]): R =
  echo (fn, a, b)

proc multiply(a, b: int): int

proc multiply(a, b: int): int =
  rcall("multiply", a, b, int)

dumpImpl multiply
This shows that the type of multiply (its function signature) has the following structure:
ProcTy
  FormalParams
    Sym "int"
    IdentDefs
      Sym "a"
      Sym "int"
      Empty
    IdentDefs
      Sym "b"
      Sym "int"
      Empty
  Empty
Keep in mind, though, that overloaded procedures cannot easily be resolved based on the name alone (because, well, there are multiple implementations). The most obvious choice is to still use a typed argument, but to pass some parameters that disambiguate the function call:
import std/macros

macro dumpImpl(arg: typed): untyped =
  echo arg.treeRepr()

proc overload(a: string) = discard
proc overload(a: int) = discard

dumpImpl overload
# ClosedSymChoice - lists all possible overloads with their respective symbols
#   Sym "overload"
#   Sym "overload"

dumpImpl overload("123")
# Call
#   Sym "overload"
#   StrLit "123"

dumpImpl overload(123)
# Call
#   Sym "overload"
#   IntLit 123
As a small (personal) side note - when talking about Nim macros, the question should usually be not "is this even possible?" but rather "what is the best way to do this?". It might require some knowledge of macro tricks, but it is possible to implement almost anything.
EDIT 1 (implementation code example added, in reply to a question in the comments):
import std/[macros]

proc rcall*[A, B, R](fn: string, a: A, b: B, _: type[R]): R =
  echo (fn, a, b)

macro remotefn(fn: typed) =
  let fname = fn.strVal()
  # `quote do` generates hygienic identifiers - i.e. all new
  # variables/functions introduced by it are unique and not visible
  # outside of it. To fix this you need to explicitly create an
  # identifier for your procedure using `ident`.
  let
    multId = ident("multiply") # <- function name identifier
    # The same goes for variables - nim has name-based overloading, so
    # you need to make sure the function arguments use the same
    # identifiers as the original ones.
    aId = ident("a")
    bId = ident("b")

  result = quote do:
    proc `multId`(`aId`, `bId`: int): int =
      rcall(`fname`, 1, 1, int)

  echo result.toStrLit()

proc multiply(a, b: int): int

remotefn multiply

Related

How to do type hint when multiple return value assigned to different variable in python3

In my code I create a function foo which returns two values, which I assign to two different variables, but I could not define types for those two variables:
def foo(a: int, b: int) -> Tuple[int, int]:
    return a + 5, b + 5
Now when I try to call the function, I am unable to assign types to those variables. I want something like this:
a: int, b: int = foo(5, 10)
But it is not working; I am getting a syntax error.
Here is the corrected function:
def foo(a: int, b: int) -> tuple[int, int]:
    return a + 5, b + 5
And you can't use a: int when creating two or more variables with ,. Just write:
a, b = foo(5, 10)
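A runnable sketch of that pattern (the extra variable names are mine): the type checker infers the unpacked variables' types from the annotated return type, and if you do want explicit annotations, you can declare them on separate lines before unpacking.

```python
from typing import Tuple

def foo(a: int, b: int) -> Tuple[int, int]:
    return a + 5, b + 5

# Types of a and b are inferred as int from the return annotation
a, b = foo(5, 10)

# If explicit annotations are wanted, declare them before unpacking:
x: int
y: int
x, y = foo(1, 2)

print(a, b, x, y)  # 10 15 6 7
```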

initialize an argument of a proc, but not in its definition line

Suppose that we have an object which has some properties of type proc:
type
  x = object
    y: proc(a, b: int)

proc myproc(a, b: int) =
  echo a

var tmp = new x
tmp.y = myproc # I want to insert an initial value in this line, for example a = 1
tmp.y(5)
How can I insert initial values in the specified line, and not anywhere else?
Thank you in advance
As far as I know it's not possible to do what you want without other modifications, because you have specified y to be a proc receiving two parameters. So whatever you assign to it, the compiler is always going to expect you to put two parameters at the call site.
One alternate approach would be to use default values in the proc definition:
type
  x = object
    y: proc(a: int = 1, b: int)

proc myproc(a, b: int) =
  echo(a, " something ", b)

var tmp = new x
tmp.y = myproc
tmp.y(b = 5)
The problems with this solution are, of course, that you can't change the value of a at runtime, and that you are forced to specify the parameter name manually, otherwise the compiler will presume you mean the first parameter and forgot to specify b. Such is the life of a non-dynamic language.
Another approach is to define the proc as having a single input parameter, and then using an anonymous proc or lambda to curry whatever values you want:
type
  x = object
    y: proc(a: int)

proc myproc(a, b: int) =
  echo(a, " something ", b)

var tmp = new x
tmp.y = proc (x: int) = myproc(1, x)
tmp.y(5)
If you were to use the sugar module, as suggested in the docs, the assignment line could look like:
tmp.y = (x: int) => myproc(1, x)

Nim template where early binding is not possible?

There's the build_type template defined in the lib.nim library.
template build_type*[T](_: type[T]): T = T.build()
The object B uses that template to build object A.
And it doesn't work - because while A is visible in the b.nim, it is not visible in main.nim, where B is used.
It works if A imported into main.nim (see commented out import), but that feels wrong as it breaks encapsulation of the B internal details (as the code using B should also import A even if it doesn't use A).
I wonder if there's other way to make it work?
playground
# main.nim ------------------------------------------
import bobject #, aobject
echo B.build()

# bobject.nim ---------------------------------------
import lib, aobject
type B* = tuple[a: A]
proc build*(_: type[B]): B = (a: build_type(A))

# aobject.nim ---------------------------------------
type A* = tuple[v: int]
proc build*(_: type[A]): A = (0,)

# lib.nim -------------------------------------------
template build_type*[T](_: type[T]): T = T.build()
Compilation error:
/main.nim(3, 7) template/generic instantiation of `build` from here
/bobject.nim(5, 44) template/generic instantiation of `build_type` from here
/lib.nim(1, 43) Error: type mismatch: got <type A>
but expected one of:
proc build(_: type[B]): B
first type mismatch at position: 1
required type for _: type B
but expression 'T' is of type: type A
expression: build(T)
I would make it work by changing bobject.nim to:
proc initB*(): B =
  (a: build_type(A))
And fixing main.nim to:
echo initB()
This works because initB has no type[T] parameter, so it is not generic: its body - including the build_type expansion - is compiled inside bobject.nim, where A's build is visible, rather than at the call site in main.nim.

python3 type hinting different type on a parameter

Let's say that my function can accept parameter b to be either an int or a list
def foo_fun(a: int, b: ??) -> int:
    if isinstance(b, list):
        # do something
    else:
        # do something else
How can i use type hinting with this particular setting?
Thanks
You seem to be describing Union.
For example:
from typing import Union

def foo_fun(a: int, b: Union[int, list]) -> int:
    if isinstance(b, list):
        # do something
    else:
        # do something else
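A runnable sketch of the same idea (the branch bodies are placeholders of my own, since the question elides them):

```python
from typing import Union

def foo_fun(a: int, b: Union[int, list]) -> int:
    if isinstance(b, list):
        return a + len(b)  # placeholder for the list branch
    return a + b           # placeholder for the int branch

print(foo_fun(1, [10, 20, 30]))  # 4
print(foo_fun(1, 2))             # 3
```

On Python 3.10+ the union can also be spelled b: int | list directly in the annotation.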

Can mypy select a method return type based on the type of the current object?

In the following code, calling the clone() method on an instance of A will return an instance of type A, calling it on an instance of B will return an instance of type B, and so on. The purpose is to create a new instance which is identical to the current one but has a different internally generated primary key, so it can be edited from there and safely saved as a new item.
??? is some kind of type qualifier, but I'm not sure what the right choice is.
class A:
    def clone(self) -> ???:
        cls = self.__class__
        result = cls.__new__(cls)
        for k, v in self.__dict__.items():
            setattr(result, k, deepcopy(v))
        result.id = self.__generate_id()
        return result

class B(A):
    def do_b(self):
        pass
I currently replaced ??? with 'A'. This works, but if I want to clone an object of type B, I have to cast the return value before I can call B-specific methods on it:
b = B().clone()
b.do_b()  # type error!

b = cast(B, B().clone())
b.do_b()  # OK
However, it's guaranteed that calling .clone() on an object of type B will return another object of type B, and this smacks rather too much of Java for my taste. Is there some generic-based way I can tell mypy that the method returns an object of type __class__, whatever that class is?
You can do this by using generic methods with a generic self -- basically, annotate your self variable as being generic:
from typing import TypeVar

T = TypeVar('T', bound='A')

class A:
    def __generate_id(self) -> int:
        return 0

    def clone(self: T) -> T:
        cls = self.__class__
        result = cls.__new__(cls)
        for k, v in self.__dict__.items():
            setattr(result, k, deepcopy(v))
        result.id = self.__generate_id()
        return result

class B(A):
    def do_b(self):
        pass

reveal_type(A().clone())  # Revealed type is A
reveal_type(B().clone())  # Revealed type is B
Basically, when we call clone(), the T typevar will be bound to the type of the current instance when mypy tries type-checking the call to clone. This ends up making the return type match the type of the instance, whatever it is.
You may be wondering why I set the upper bound of T to A. This is because of the line self.__generate_id(). Basically, in order for that line to type-check, self can't be literally any type: it needs to be A or some subclass of A. The bound encodes this requirement.
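On Python 3.11+ the same thing can be written without an explicit TypeVar using typing.Self (available from typing_extensions on older versions). A minimal sketch, with the id generator stubbed out and renamed _generate_id for illustration:

```python
from __future__ import annotations  # annotations are not evaluated at runtime

from copy import deepcopy
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from typing import Self  # Python 3.11+; or typing_extensions.Self

class A:
    def _generate_id(self) -> int:
        return 0  # stub for the question's private id generator

    def clone(self) -> Self:
        # Build a new instance of whatever class self actually is
        cls = self.__class__
        result = cls.__new__(cls)
        for k, v in self.__dict__.items():
            setattr(result, k, deepcopy(v))
        result.id = self._generate_id()
        return result

class B(A):
    def do_b(self) -> str:
        return "did b"

b = B().clone()
print(type(b).__name__, b.do_b())  # B did b
```

Self carries the same "type of the current instance" meaning as the bounded TypeVar, so B().clone() is typed as B with no cast.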
