I have created a number of functions and they work fine individually. However, other developers currently have to call each of these functions directly.
I would like to simplify this by creating a module, so that a user can make a single call along the lines of the following:
'type' will be one of the following (mandatory):
a, b, c, d
For each type, the relevant function should be called from the module.
'info' will be input from the developer (optional).
'param' will be a compulsory list for DBNAME (SQL, ORACLE, TERADATA, etc.) and optional for the rest.
I have created a class for type (below). However, I am unable to write proper code that dispatches to the above functions with IF statements based on these types. How might I achieve this?
IIUC, you can use a dict to look up and call the required function.
Ex:
def log_info(info=None):
    print('log_info', info)

def log_info_DBCS(info=None):
    print('log_info_DBCS', info)

def log_info_DBName(param, info=None):
    print('log_info_DBName', param, info)

def log_info_TableName(info=None):
    print('log_info_TableName', info)

def log_info_RecordCount(info=None):
    print('log_info_RecordCount', info)

def log_info_Duration(info=None):
    print('log_info_Duration', info)

def call_func(func, **kwargs):
    # Map the 'type' string to the function that handles it.
    f = {'log_info': log_info,
         'log_info_DBCS': log_info_DBCS,
         'log_info_DBName': log_info_DBName,
         'log_info_TableName': log_info_TableName,
         'log_info_RecordCount': log_info_RecordCount,
         'log_info_Duration': log_info_Duration}
    return f[func](**kwargs)

typ = 'log_info_DBName'
dbname = 'SQL'
call_func(typ, **{"param": dbname})
call_func(typ, **{"param": dbname, 'info': "Hello World"})
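If you also want to enforce the rule from the question that 'param' is compulsory for the DB-name case and optional for the rest, one option is a thin wrapper around the same dict. This is only a sketch; REQUIRES_PARAM and call_func_checked are names invented for illustration:

# Hypothetical helper: the set of types that must receive a 'param' argument.
REQUIRES_PARAM = {'log_info_DBName'}

def call_func_checked(func, **kwargs):
    # Fail fast with a clear message instead of a TypeError from the call itself.
    if func in REQUIRES_PARAM and 'param' not in kwargs:
        raise ValueError(f"'param' is mandatory for {func}")
    return call_func(func, **kwargs)

call_func_checked('log_info_DBName', param='SQL', info='Hello World')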
First off, I wouldn't name anything type since that is a built-in.
Second, you don't need any if statements; it looks like the only thing that varies is the error string, so you can just stick that into the enum value, and use it as a format string:
import logging
from enum import Enum

logger = logging.getLogger(__name__)

class LogType(Enum):
    Info = "cmd: {}"
    DBName = "cmd: Connected to database-({})"
    # .. etc.

def log_info(logtype, info):
    logger.info(logtype.value.format(info))
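Calling it then looks like this (a minimal sketch; the message arguments are made up):

log_info(LogType.Info, "starting run")  # logs "cmd: starting run"
log_info(LogType.DBName, "SQL")         # logs "cmd: Connected to database-(SQL)"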
In the example below, the __init__ method of MyClass defines the attribute self._user, which optionally has the type UserInput and is initialized as None. The actual user input should be provided via the method set_user; for practical reasons, it cannot be passed to __init__. After the user input is given, the other methods method_1 and method_2 can be called.
Question to professional Python programmers: do I really need to add assert ... not None in every method that uses self._user? Otherwise, VS Code Pylance type checking will complain that self._user might be None.
However, I tried the same code in PyCharm with its built-in type checking. This issue is not raised there.
And as professional Python programmers, do you prefer the Pylance type checking in VS Code, or the built-in type checking in PyCharm?
Thanks in advance.
class UserInput:
    name: str
    age: int

class MyClass:
    def __init__(self):
        self._user: UserInput | None = None

    def set_user(self, user: UserInput):  # This method should be called before calling any other methods.
        self._user = user

    def method_1(self):
        assert self._user is not None  # do I actually need it?
        # do something using self._user, for example return its age.
        return self._user.age  # Will get a warning without the assert above.

    def method_2(self):
        assert self._user is not None  # do I actually need it?
        # do something using self._user, for example return its name.
        return self._user.name
I think it's safest and cleanest if you keep the asserts in. After all, it is up to the user of your class in which order they call the instance methods. Therefore, you cannot guarantee that self._user is not None.
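If you want the failure itself to explain what went wrong, assert also accepts a message. A minimal sketch of method_1 with one:

def method_1(self):
    # The message is reported with the AssertionError if the check fails.
    assert self._user is not None, "set_user() must be called before method_1()"
    return self._user.age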
I think it's bad practice to use assert in production code. When things go wrong, you get lots of AssertionErrors, but no context about why the assertion was made.
Instead, I would catch the issue early rather than handle it later. If set_user() must be called first, I'd be tempted to pass the user to the __init__ method instead, but the same principle applies.
from dataclasses import dataclass

@dataclass
class UserInput:
    name: str
    age: int

class NoUserException(TypeError):
    pass

class MyClass:
    # Or this could be in the __init__ method
    def set_user(self, user: UserInput | None):
        if not user:
            raise NoUserException()
        self._user = user

    def method_1(self):
        # do something using self._user, for example return its age.
        return self._user.age

    def method_2(self):
        # do something using self._user, for example return its name.
        return self._user.name
You already stated that set_user will be called first, so when that happens you'll get a NoUserException if the user is None.
But I'd be tempted to not even do that. If I were writing this, I'd have no NoneType checking in MyClass, and instead not call set_user if the user was None.
m = MyClass()
user = ...
if user:
    m.set_user(user)
    ...  # anything else with `m`
else:
    ...  # here is where you would get an error
I have two methods which take different numbers of arguments. Here are the two functions:
def jumpMX(self, IAS, list):
    pass

def addMX(self, IAS):
    pass
I am using a function which will return one of these functions to main. I have stored the returned function in a variable named operation.
Since the number of parameters is different for each, how do I identify which function has been returned?
if(operation == jumpMX):
    operation(IAS, list)
elif(operation == addMX):
    operation(IAS)
What is the syntax for this? Thanks in advance!
You can identify a function through its __name__ attribute:
def foo():
    pass

print(foo.__name__)  # prints: foo
...or in your case:
operation.__name__  # will return either "jumpMX" or "addMX", depending on which function is stored in operation
Here's a demo you can modify to your needs:
import random  # used only for demo purposes

def jumpMX(self, IAS, list):
    pass

def addMX(self, IAS):
    pass

def FunctionThatWillReturnOneOrTheOtherOfTheTwoFunctionsAbove():
    # This will randomly return either jumpMX()
    # or addMX() to simulate different scenarios
    funcs = [jumpMX, addMX]
    randomFunc = random.choice(funcs)
    return randomFunc

operation = FunctionThatWillReturnOneOrTheOtherOfTheTwoFunctionsAbove()
name = operation.__name__
if name == "jumpMX":
    operation(IAS, list)
elif name == "addMX":
    operation(IAS)
You can import those functions and test for equality, like with most objects in Python.
classes.py

class MyClass:
    @staticmethod
    def jump(self, ias, _list):
        pass

    @staticmethod
    def add(self, ias):
        pass

main.py

from classes import MyClass

myclass_instance = MyClass()
operation = get_op()  # your function that returns MyClass.jump or MyClass.add

if operation == MyClass.jump:
    operation(myclass_instance, ias, _list)
elif operation == MyClass.add:
    operation(myclass_instance, ias)
However, I must emphasize that I don't know what you're trying to accomplish and this seems like a terribly contrived way of doing something like this.
Also, your Python code examples are not properly formatted. See PEP 8, which proposes a standard style guide for Python.
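Another option, not used in either answer above, is to look at the returned function's signature rather than its identity or name; inspect.signature from the standard library reports the parameters, so you can branch on how many arguments the function expects. A minimal sketch, reusing the names from the code above:

import inspect

operation = get_op()  # returns MyClass.jump or MyClass.add
n_params = len(inspect.signature(operation).parameters)
if n_params == 3:   # self, ias, _list  -> MyClass.jump
    operation(myclass_instance, ias, _list)
else:               # self, ias         -> MyClass.add
    operation(myclass_instance, ias)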
Using the {:typed_struct, "~> 0.2.1"} library, I have the following struct:
defmodule RequestParams do
  use TypedStruct

  typedstruct do
    field :code, String.t()
    field :name, String.t()
  end
end
I am trying to pattern match function parameters to a struct:
def do_handle(params %RequestParams{}, _context) do
# instead of
# def do_handle(%{ "code" => code, "name" => name}, _context) do
But I get an exception:
cannot find or invoke local params/2 inside match. Only macros can be
invoked in a match and they must be defined before their invocation.
What is wrong? And is it possible at all to match function parameters to a struct?
Cause of the issue
In Elixir, parentheses in function calls are not mandatory (although desirable). That said, the code def foo(bar baz) do ... is parsed and treated as def foo(bar(baz)) do ..., because the parser assumes parentheses were omitted in a call to the function bar. You should have gotten a warning from the compiler saying exactly that. Warnings are supposed to be read and eliminated.
Quick fix
As pointed out by @peaceful-james, pattern matching inside the parentheses would do.
def do_handle(%RequestParams{} = params, _context) do
“Instead of”
You wrote
def do_handle(params %RequestParams{}, _context) do
# instead of
# def do_handle(%{ "code" => code, "name" => name}, _context) do
Even if it were syntactically correct, the code above is not equivalent to the code below. The code below would accept any map having the two keys "code" and "name", while the code above allows instances of RequestParams only. One might also make the code below more strict with:
def do_handle(%RequestParams{code: code, name: name}, _) do
Struct keys
But structs in Elixir cannot have anything but atoms as keys. That said, if your initial code accepted %{"code" => _}, there is no way to turn it into accepting a struct without modifying the calling code.
Typed stuff
Types are not first-class citizens in Elixir. I personally find that appropriate. You should start by understanding the language, OTP principles, and the paradigm of the language, and only after that decide whether you want and/or need types at all.
Did you actually mean to write this:
def do_handle(%RequestParams{}=params, _context) do
def fixed_given(self):
    return given(
        test_df=data_frames(
            columns=columns(
                ["float_col1"],
                dtype=float,
            ),
            rows=tuples(
                floats(allow_nan=True, allow_infinity=True)),
        )
    )(self)

@pytest.fixture()
@fixed_given
def its_a_fixture(test_df):
    obj = its_an_class(test_df)
    return obj

@pytest.fixture()
@fixed_given
def test_1(test_df):
    ...  # use returned object from my fixture here

@pytest.fixture()
@fixed_given
def test_2(test_df):
    ...  # use returned object from my fixture here
Here, I am creating my test dataframe in a separate function so that I can use it commonly across all test functions.
I am then creating a pytest fixture to instantiate a class, passing it the test dataframe generated by the fixed_given function.
I am looking for a way to get a return value from this fixture.
But the problem is that I am using the given decorator, and it doesn't allow return values.
Is there a way to return a value even after using the given decorator?
It's not clear what you're trying to achieve here, but reusing inputs generated by Hypothesis gives up most of the power of the framework (including minimal examples, replaying failures, settings options, etc.).
Instead, you can define a global variable for your strategy - or write a function that returns a strategy with @st.composite - and use that in each of your tests, e.g.
from hypothesis import given
from hypothesis.strategies import floats
from hypothesis.extra.pandas import column, data_frames

MY_STRATEGY = data_frames(columns=[
    column(name="float_col1", elements=floats(allow_nan=True, allow_infinity=True))
])

@given(MY_STRATEGY)
def test_foo(df): ...

@given(MY_STRATEGY)
def test_bar(df): ...
Specifically to answer the question you asked, you cannot get a return value from a function decorated with @given.
Instead of using fixtures to instantiate your class, try using the .map method of a strategy (in this case data_frames(...).map(its_an_class)), or the builds() strategy (i.e. builds(my_class, data_frames(...), ...)).
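For example, a minimal sketch of both options, assuming its_an_class takes the generated dataframe as its only constructor argument (the name is taken from the question):

from hypothesis import given
from hypothesis.strategies import builds

# Option 1: map the dataframe strategy straight into the class.
OBJ_STRATEGY = MY_STRATEGY.map(its_an_class)

# Option 2: equivalent, using builds().
OBJ_STRATEGY = builds(its_an_class, MY_STRATEGY)

@given(OBJ_STRATEGY)
def test_1(obj):
    ...  # use the instantiated object here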
I want to pass a string to a method/class function which resolves the correct attribute to modify. I'm pretty sure I've done this before, but I seem to have forgotten how.
class A:
    def __init__(self):
        self.word = B.getWord()
        self.phrase = "Some default string"

    def set_dynamically(self, attribute, value):
        self[attribute] = value
This would let me do something like A.set_dynamically('word', C.getWord())
I've tried searching for a question and answer for this but I'm having a hard time defining what this is called, so I didn't really find anything.
Python objects have a built-in method called __setattr__(self, name, value) that does this. You can invoke it by calling the built-in setattr() with the object, the attribute name, and the value as arguments:
a = A()
setattr(a, 'word', C.getWord())
There's no reason to do this when you could just write a.word = C.getWord() (which, in fact, resolves down to calling __setattr__() the same way the built-in setattr() function does), but if the attribute you're setting is named dynamically, then this is how you get around that limitation.
If you want to customize how your class behaves when you call setattr() on it (or when you set an attribute normally), you can override the __setattr__(self, name, value) method in much the same way as you're overriding __init__(). Be careful if you do this, because it's really easy to accidentally produce an infinite recursion error; to avoid it, use object.__setattr__(self, name, value) inside your overridden __setattr__(self, name, value).
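A minimal sketch of such an override (the print behaviour is just an invented example), showing the object.__setattr__ call that avoids the recursion:

class A:
    def __init__(self):
        self.phrase = "Some default string"

    def __setattr__(self, name, value):
        print(f"setting {name!r} to {value!r}")
        # Delegate to the base implementation; assigning self.name or
        # calling setattr() here would recurse back into this method.
        object.__setattr__(self, name, value)

a = A()                      # prints: setting 'phrase' to 'Some default string'
a.phrase = "Hello"           # prints: setting 'phrase' to 'Hello'
setattr(a, 'word', 'spam')   # prints: setting 'word' to 'spam'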
Just wanted to add my own solution as well. I created a mapping object:
def _mapper(self, attr, obj):
    # Map attribute-name strings to the corresponding bound methods.
    m = {"funcA": obj.funcA,
         "funcB": obj.funcB,
         # ... and so on
         }
    return m.get(attr)

def call_specific_func_(self, attr):
    # -- do stuff --
    for a in some_list:
        attr = a.get_attr()
        retvals = self._mapper(attr, a)
        # -- etc --
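A note on this design: for the read direction, the built-in getattr() does the same string-to-attribute resolution without a hand-maintained dict, as long as the string matches the attribute or method name. A minimal sketch (funcA and the surrounding names are placeholders from the solution above):

def _mapper(self, attr, obj):
    # getattr returns obj.funcA when attr == "funcA"; None if it is missing.
    return getattr(obj, attr, None)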