Using the {:typed_struct, "~> 0.2.1"} library, I have the following struct:
defmodule RequestParams do
use TypedStruct
typedstruct do
field :code, String.t()
field :name, String.t()
end
end
I am trying to pattern-match a function parameter against the struct:
def do_handle(params %RequestParams{}, _context) do
# instead of
# def do_handle(%{ "code" => code, "name" => name}, _context) do
But I get an exception:
cannot find or invoke local params/2 inside match. Only macros can be
invoked in a match and they must be defined before their invocation.
What is wrong? And is it possible at all to match function parameters against a struct?
Cause of the issue
In Elixir, parentheses in function calls are not mandatory (although desirable). That said, the code def foo(bar baz) do ... is parsed and treated as def foo(bar(baz)) do ..., because the parser assumes omitted parentheses in a call to the function bar. You should have gotten a warning from the compiler saying exactly that. Warnings are supposed to be read and eliminated.
Quick fix
As pointed out by @peaceful-james, pattern matching inside the parentheses would do.
def do_handle(%RequestParams{} = params, _context) do
“Instead of”
You wrote
def do_handle(params %RequestParams{}, _context) do
# instead of
# def do_handle(%{ "code" => code, "name" => name}, _context) do
Even if it were syntactically correct, the code above is not equivalent to the code below. The code below would accept any map having the two keys "code" and "name", while the code above allows instances of RequestParams only. One might also make the code below stricter with:
def do_handle(%RequestParams{code: code, name: name}, _) do
Struct keys
But structs in Elixir cannot have anything but atoms as keys. That said, if your initial code accepted %{"code" => _}, there is no way to turn it into accepting a struct without modifying the calling code.
Typed stuff
Types are not first-class citizens in Elixir. I personally find it appropriate. You should start with understanding the language, OTP principles, and the paradigm of the language, and only after that decide whether you want and/or need types at all.
Do you actually mean to write this:
def do_handle(%RequestParams{} = params, _context) do
I want to find the caller callable from within the called object, without explicitly forwarding the caller to the callee as an object.
My current code looks something like this:
class Boo:
    @classmethod
    def foo(cls, aa, b2=2):
        _ret = aa + b2
        autolog(fn=Boo.foo, values={"input": locals(), "output": _ret}, message="This is what it should look like")
        autolog_nameless(values={"input": locals(), "output": _ret}, message="This would be convenient")
        return _ret
and yields
DEBUG | Boo.foo with aa=3.14159, b2=2 yields 5.14159. Message: This is what it should look like
DEBUG | cls=<class '__main__.Boo'>, aa=3.14159, b2=2, _ret=5.14159 yields 5.14159. Message: This would be convenient
The method autolog gets the locals() and the caller method fn, and parses them using the signature of the caller. This works nicely and provides the desired output, but requires passing the caller as an object - something I'd like to avoid, as I'm refactoring to include this feature and have about 1000 places to modify.
What I'd like to achieve is: pass locals() only; get the name of the caller within autolog_nameless, using inspect.stack()[1][3] or rather inspect.currentframe().f_back.f_code.co_name (the latter has much less overhead); and, using this - and possibly the information in locals() - find the caller object to inspect it for its signature.
The method autolog_nameless gets cls, actually the class, as part of locals() (or would get self if the caller were a plain instance method), but I can't really do anything with it.
I'd think all the information required is given, but I just can't find a solution. Any help is greatly appreciated.
As it turns out it's quite simple: listing the methods of the class object found in locals() and searching by name should do the trick.
Code, without error checking:
import inspect

# getting all methods of the class
methods = inspect.getmembers(locals()['cls'], predicate=inspect.ismethod)
# finding the caller's name; won't work within the list comprehension for scope issues
_name = inspect.currentframe().f_back.f_code.co_name
# methods is a list of tuples, each tuple holding the name and the method object
fn = [x for x in methods if x[0] == _name][0][1]
and fn is the caller object to check the signature.
Note, locals()['cls'] works here as in the example we have a classmethod, but this is just the object that the called method belongs to.
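Putting the pieces together, here is a minimal runnable sketch of this approach. The body of autolog_nameless and its output format are my own assumptions, not the original code; only the frame/getmembers lookup mirrors the answer.

```python
import inspect

def autolog_nameless(values, message=""):
    # Sketch: recover the caller's method object from its name alone.
    # Assumes the caller passes its locals() (including cls or self) as
    # values["input"] and its result as values["output"].
    name = inspect.currentframe().f_back.f_code.co_name
    owner = values["input"].get("cls") or values["input"].get("self")
    # bound methods (including classmethods accessed on the class) satisfy ismethod
    fn = dict(inspect.getmembers(owner, predicate=inspect.ismethod))[name]
    sig = inspect.signature(fn)  # cls/self is already bound away here
    shown = {k: values["input"][k] for k in sig.parameters if k in values["input"]}
    args = ", ".join(f"{k}={v}" for k, v in shown.items())
    cls_name = owner.__name__ if inspect.isclass(owner) else type(owner).__name__
    return f"DEBUG | {cls_name}.{name} with {args} yields {values['output']}. Message: {message}"

class Boo:
    @classmethod
    def foo(cls, aa, b2=2):
        _ret = aa + b2
        print(autolog_nameless({"input": locals(), "output": _ret},
                               message="no caller object needed"))
        return _ret
```

Calling Boo.foo(3.14159) prints a line in the style of the desired output, without passing fn explicitly.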
Is there a better way of doing this?
def __init__(self, **kwargs):
    self.ServiceNo = kwargs["ServiceNo"]
    self.Operator = kwargs["Operator"]
    self.NextBus = kwargs["NextBus"]
    self.NextBus2 = kwargs["NextBus2"]
    self.NextBus3 = kwargs["NextBus3"]
The attributes (ServiceNo, Operator, ...) always exist.
That depends on what you mean by "simpler".
For example, is what you wrote simpler than what I would write, namely
def __init__(self, ServiceNo, Operator, NextBus, NextBus2, NextBus3):
    self.ServiceNo = ServiceNo
    self.Operator = Operator
    self.NextBus = NextBus
    self.NextBus2 = NextBus2
    self.NextBus3 = NextBus3
True, I've repeated each attribute name an additional time, but I've made it much clearer which arguments are legal for __init__. The caller is not free to add any additional keyword argument they like, only to see it silently ignored.
Of course, there's a lot of boilerplate here; that's something a dataclass can address:
from dataclasses import dataclass

@dataclass
class Foo:
    ServiceNo: int
    Operator: str
    NextBus: Bus
    NextBus2: Bus
    NextBus3: Bus
(Adjust the types as necessary.)
Now each attribute is mentioned once, and you get the __init__ method shown above for free.
Better how? You don’t really describe what problem you’re trying to solve.
If it’s error handling, you can use the dictionary .get() method for the event that a key doesn’t exist.
If you just want a more succinct way of initializing variables, you could remove the ** and have the dictionary as a variable itself, then use it elsewhere in your code, but that depends on what your other methods are doing.
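To make the .get() suggestion concrete, here is a short sketch; the class name BusService is my own placeholder, while the field names come from the question:

```python
class BusService:
    def __init__(self, **kwargs):
        # .get() returns None (or a chosen default) instead of raising KeyError
        self.ServiceNo = kwargs.get("ServiceNo")
        self.Operator = kwargs.get("Operator")
        self.NextBus = kwargs.get("NextBus")
        self.NextBus2 = kwargs.get("NextBus2")
        self.NextBus3 = kwargs.get("NextBus3")

s = BusService(ServiceNo=10, Operator="SBS")
print(s.ServiceNo)  # 10
print(s.NextBus)    # None - missing key, but no exception
```

Whether silently defaulting to None is acceptable depends on the use case; the explicit-arguments and dataclass versions above fail loudly instead.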
A hacky solution available since the attributes and the argument names match exactly is to directly copy from the kwargs dict to the instance's dict, then check that you got all the keys you expected, e.g.:
def __init__(self, **kwargs):
    vars(self).update(kwargs)
    if vars(self).keys() != {"ServiceNo", "Operator", "NextBus", "NextBus2", "NextBus3"}:
        raise TypeError(f"{type(self).__name__} got missing or unexpected arguments")
I don't recommend this; chepner's options are all superior to this sort of hackery, and they're more reliable (for example, this solution fails if you use __slots__ to prevent autovivification of attributes, as the instance won't have a backing dict you can pull with vars).
How can I cast a var into a CustomClass?
In Python, I can use float(var), int(var) and str(var) to cast a variable into primitive data types but I can't use CustomClass(var) to cast a variable into a CustomClass unless I have a constructor for that variable type.
Example with inheritance.
class CustomBase:
    pass

class CustomClass(CustomBase):
    def foo(self):
        pass

def bar(var: CustomBase):
    if isinstance(var, CustomClass):
        # customClass = CustomClass(var) <-- Would like to cast here...
        # customClass.foo() <-- to make it clear that I can call foo here.
        pass
In the process of writing this question I believe I've found a solution.
Python uses duck typing
Therefore it is not necessary to cast before calling a function.
I.e., the following is functionally fine:
def bar(var):
    if isinstance(var, CustomClass):
        var.foo()
I actually wanted static type casting on variables
I want this so that I can continue to get all the lovely benefits of the typing PEP in my IDE such as checking function input types, warnings for non-existant class methods, autocompleting methods, etc.
For this I believe re-annotating the variable (often called type narrowing) is a suitable solution:
class CustomBase:
    pass

class CustomClass(CustomBase):
    def foo(self):
        pass

def bar(var: CustomBase):
    if isinstance(var, CustomClass):
        customClass: CustomClass = var
        customClass.foo()  # Now my IDE doesn't report this method call as a warning.
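An alternative worth knowing about is typing.cast from the standard library, which expresses the same intent without introducing a new variable annotation. A sketch (the method bodies are mine, for illustration):

```python
from typing import cast

class CustomBase:
    pass

class CustomClass(CustomBase):
    def foo(self) -> str:
        return "foo"

def bar(var: CustomBase) -> str:
    if isinstance(var, CustomClass):
        # cast() is a no-op at runtime; it only tells the type checker
        # to treat var as a CustomClass from here on.
        obj = cast(CustomClass, var)
        return obj.foo()
    return ""

print(bar(CustomClass()))  # foo
```

Note that modern type checkers (mypy, Pyright) already narrow var to CustomClass inside the isinstance branch, so in this exact shape neither the cast nor the re-annotation is strictly required.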
I have created a list of functions and they working fine individually. However other developers have to call these functions individually.
They are currently called like this. I would like to simplify this further when creating the module, so that users can call it using the line below:
'type' will be any of the below (mandatory)
a, b, c, d
For each type, the relevant function should be called from the module.
'info' will be input from the developer (optional)
'param' will be a compulsory list for DBNAME (SQL, ORACLE, TERADATA, etc.) and optional for the rest.
I have created the below class for the types. However, I am unable to write proper code that dispatches to the above functions based on the type using if statements. How might I achieve this?
IIUC, you can use a dict to dispatch to the required function.
Ex:
def log_info(info=None):
    print('log_info', info)

def log_info_DBCS(info=None):
    print('log_info_DBCS', info)

def log_info_DBName(param, info=None):
    print('log_info_DBName', param, info)

def log_info_TableName(info=None):
    print('log_info_TableName', info)

def log_info_RecordCount(info=None):
    print('log_info_RecordCount', info)

def log_info_Duration(info=None):
    print('log_info_Duration', info)

def call_func(func, **kargs):
    f = {'log_info': log_info,
         'log_info_DBCS': log_info_DBCS,
         'log_info_DBName': log_info_DBName,
         'log_info_TableName': log_info_TableName,
         'log_info_RecordCount': log_info_RecordCount,
         'log_info_Duration': log_info_Duration}
    return f[func](**kargs)

typ = 'log_info_DBName'
dbname = 'SQL'
call_func(typ, **{"param": dbname})
call_func(typ, **{"param": dbname, 'info': "Hello World"})
First off, I wouldn't name anything type since that is a built-in.
Second, you don't need any if statements; it looks like the only thing that varies is the error string, so you can just stick that into the enum value, and use it as a format string:
import logging
from enum import Enum

logger = logging.getLogger(__name__)

class LogType(Enum):
    Info = "cmd: {}"
    DBName = "cmd: Connected to database-({})"
    # .. etc.

def log_info(logtype, info):
    logger.info(logtype.value.format(info))
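A self-contained sketch of this enum-as-format-string approach; the logging setup is my own assumption, and the names mirror the snippet above:

```python
import logging
from enum import Enum

logging.basicConfig(level=logging.INFO, format="%(levelname)s | %(message)s")
logger = logging.getLogger(__name__)

class LogType(Enum):
    Info = "cmd: {}"
    DBName = "cmd: Connected to database-({})"

def log_info(logtype, info):
    # the enum value doubles as a format string
    logger.info(logtype.value.format(info))

log_info(LogType.DBName, "SQL")   # logs: INFO | cmd: Connected to database-(SQL)
log_info(LogType.Info, "hello")   # logs: INFO | cmd: hello
```

Adding a new type is then a one-line change to the enum, with no if/elif chain to maintain.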
I want to pass a string to a method/class function which resolves the correct attribute to modify. I'm pretty sure i've done this before, but I seem to have forgotten how to.
class A:
    def __init__(self):
        self.word = B.getWord()
        self.phrase = "Some default string"

    def set_dynamically(self, attribute, value):
        self[attribute] = value
This would let me do something like A.set_dynamically('word', C.getWord())
I've tried searching for a question and answer for this but I'm having a hard time defining what this is called, so I didn't really find anything.
Python objects have a built-in method called __setattr__(self, name, value) that does this. You can invoke it by calling the built-in setattr() function with the object as the first argument:
a = A()
setattr(a, 'word', C.getWord())
There's no reason to do this when you could just do something like a.word = C.getWord() (which, in fact, resolves down to calling __setattr__() the same way the built-in setattr() function does), but if the attribute you're setting is named dynamically, then this is how you get around that limitation.
If you want to customize the behavior of how your class acts when you try to call setattr() on it (or when you try to set an attribute normally), you can override the __setattr__(self, name, value) method in much the same way as you're overriding __init__(). Be careful if you do this, because it's really easy to accidentally produce an infinite recursion error - to avoid this you can use object.__setattr__(self, name, value) inside your overridden __setattr__(self, name, value).
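Applied to the class from the question, a runnable sketch looks like this (the B.getWord() and C.getWord() helpers are replaced with plain strings, which are my assumption):

```python
class A:
    def __init__(self):
        self.word = "default word"
        self.phrase = "Some default string"

    def set_dynamically(self, attribute, value):
        # setattr replaces the invalid self[attribute] = value
        setattr(self, attribute, value)

a = A()
a.set_dynamically("word", "hello")
print(a.word)  # hello
```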
Just wanted to add my own solution as well. I created a mapping method:
def _mapper(self, attr, obj):
    m = {"funcA": obj.funcA,
         "funcB": obj.funcB,
         # ... and so on ...
         }
    return m.get(attr)

def call_specific_func_(self, attr):
    # -- do stuff --
    for a in some_list:
        attr = a.get_attr()
        retvals = self._mapper(attr, a)
    # -- etc --
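Since the dict keys match the attribute names exactly, the same lookup can be done with the built-in getattr and no mapping dict at all. A sketch (the Worker class and method bodies are mine, for illustration):

```python
class Worker:
    def funcA(self):
        return "A"

    def funcB(self):
        return "B"

def dispatch(obj, attr):
    # getattr resolves the method by name; the default of None
    # mirrors m.get(attr) returning None for unknown keys
    fn = getattr(obj, attr, None)
    return fn() if callable(fn) else None

print(dispatch(Worker(), "funcA"))  # A
```

The explicit dict does have one advantage: it acts as an allow-list, so callers cannot invoke arbitrary attributes by name.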