I am new to the Nim programming language. When I use Python, I can use 'pass' to skip the definition details of a function or class:
def foo():
    pass  # skip detail

class Bar():
    pass
Is there something like this in Nim?
discard does the same in Nim:
proc foo =
  discard # skip detail

type Bar = object
For objects it is not even necessary (though possible) to specify discard.
Related
Is there a nice way to control the hash of a method of a Python class?
Let's say I have the following example:
class A:
    def hello(self, arg):
        print(arg)

    def __hash__(self):
        return 12345

a = A()
b = A()
hash(a.hello) == hash(b.hello)  # >>> False
Now I'm vaguely aware of why that is. Internally, methods are functions with a class reference and some magic attached, but basically they are just functions that (probably) inherit from object. So the __hash__ method of class A is only relevant to its own instances.
However, while this makes sense at first, I realized that in Python 3.7 the example above evaluates to True, while in 3.8 it is False.
Does anyone: (1) know how to achieve this behavior in > 3.7 (thus controlling the hash of a method), and, (2) know why and what changed between the two versions (I am starting to doubt my sanity tbh)?
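For what it's worth, a minimal sketch of one way to regain control over the hash, without relying on version-specific bound-method behaviour, is to wrap the method in your own callable whose __hash__ you define yourself. HashedMethod below is a hypothetical helper, not a standard API, and it assumes the class A from the question:

class HashedMethod:
    """Hypothetical wrapper: a callable whose hash we control ourselves."""
    def __init__(self, func, instance):
        self.func, self.instance = func, instance

    def __call__(self, *args, **kwargs):
        return self.func(self.instance, *args, **kwargs)

    def __hash__(self):
        return 12345  # same fixed hash as A.__hash__ in the question

    def __eq__(self, other):
        return isinstance(other, HashedMethod) and self.func is other.func

a, b = A(), A()
print(hash(HashedMethod(A.hello, a)) == hash(HashedMethod(A.hello, b)))  # True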
In the example below, the __init__ method of MyClass defines the attribute self._user with the optional type UserInput | None and initializes it as None. The actual user input should be provided via the method set_user. For practical reasons, the user input cannot be passed to __init__. After the user input has been provided, the other methods method_1 and method_2 can be called.
Question to professional Python programmers: do I really need to add assert self._user is not None in every method that uses self._user? Otherwise, the Pylance type checker in VS Code complains that self._user might be None.
However, when I tried the same code in PyCharm with its built-in type checking, the issue was not raised there.
And as professional Python programmers, do you prefer the Pylance type checking in VS Code, or the built-in type checking in PyCharm?
Thanks in advance.
class UserInput:
    name: str
    age: int

class MyClass:
    def __init__(self):
        self._user: UserInput | None = None

    def set_user(self, user: UserInput):  # This method should be called before any of the other methods.
        self._user = user

    def method_1(self):
        assert self._user is not None  # do I actually need it?
        # do something using self._user, for example return its age.
        return self._user.age  # Will get a warning without the assert above.

    def method_2(self):
        assert self._user is not None  # do I actually need it?
        # do something using self._user, for example return its name.
        return self._user.name
I think it's safest and cleanest if you keep the asserts in. After all, it is up to the user of your class in which order they call the instance methods. Therefore, you cannot guarantee that self._user is not None.
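For illustration, with the asserts in place a call in the wrong order fails loudly right at the assert (a tiny usage sketch, assuming the MyClass from the question):

m = MyClass()
m.method_1()  # raises AssertionError immediately, because self._user is still None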
I think it's bad practice to use assert in production code. When things go wrong, you get lots of AssertionErrors, but you don't have any context about why those assertions were being made.
Instead I would catch the issue early rather than handling it later. If set_user() should be called earlier, I'd be tempted to require the user in the __init__ method, but the same principle applies.
from dataclasses import dataclass

@dataclass
class UserInput:
    name: str
    age: int

class NoUserException(TypeError):
    pass

class MyClass:
    # Or this could be in the __init__ method
    def set_user(self, user: UserInput | None):
        if not user:
            raise NoUserException()
        self._user = user

    def method_1(self):
        # do something using self._user, for example return its age.
        return self._user.age

    def method_2(self):
        # do something using self._user, for example return its name.
        return self._user.name
You already stated that set_user will be called first, so when that happens you'll get a NoUserException if the user is None.
But I'd be tempted to not even do that. If I were writing this, I'd have no NoneType checking in MyClass, and instead not call set_user if the user was None.
m = MyClass()
user = ...
if user:
    m.set_user(user)
    ...  # anything else with `m`
else:
    ...  # here is where you would get an error
How can I cast a var into a CustomClass?
In Python, I can use float(var), int(var) and str(var) to cast a variable to the primitive data types, but I can't use CustomClass(var) to cast a variable to a CustomClass unless I have a constructor for that variable's type.
Example with inheritance.
class CustomBase:
    pass

class CustomClass(CustomBase):
    def foo(self):
        pass

def bar(var: CustomBase):
    if isinstance(var, CustomClass):
        # customClass = CustomClass(var)  <-- Would like to cast here...
        # customClass.foo()               <-- to make it clear that I can call foo here.
        ...
In the process of writing this question I believe I've found a solution.
Python uses duck typing.
Therefore it is not necessary to cast before calling a function.
I.e. the following is functionally fine:
def bar(var):
    if isinstance(var, CustomClass):
        var.foo()
I actually wanted static type casting on variables
I want this so that I can continue to get all the lovely benefits of the typing PEP in my IDE, such as checking function input types, warnings for non-existent class methods, autocompleting methods, etc.
For this I believe re-typing (not sure if this is the correct term) is a suitable solution:
class CustomBase:
    pass

class CustomClass(CustomBase):
    def foo(self):
        pass

def bar(var: CustomBase):
    if isinstance(var, CustomClass):
        customClass: CustomClass = var
        customClass.foo()  # Now my IDE doesn't report this method call as a warning.
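As a side note (not part of the original question), the standard library's typing.cast expresses the same intent: it does nothing at runtime and only informs the type checker. A minimal sketch, reusing CustomBase/CustomClass from above:

from typing import cast

def bar(var: CustomBase):
    if isinstance(var, CustomClass):
        customClass = cast(CustomClass, var)  # no runtime effect, purely for the type checker
        customClass.foo()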
In Python there is no switch/case. It is suggested to use dictionaries: What is the Python equivalent for a case/switch statement?
In Python it is good practice to use @property to implement getters/setters: What's the pythonic way to use getters and setters?
So, if I want to build a class with a list of properties to switch so I can get or update values, I can use something like:
class Obj():
    """property demo"""

    @property
    def uno(self):
        return self._uno

    @uno.setter
    def uno(self, val):
        self._uno = val * 10

    @property
    def options(self):
        return dict(vars(self))
But calling
o = Obj()
o.uno = 10  # o.uno is now 100
o.options
I obtain {'_uno': 100} and not {'uno': 100}.
Am I missing something?
vars is really a tool for introspection: it gives you the local variables of the current scope, or the instance variables of a given object. It is not a good way to get attributes ready for final consumption.
So, your options code must be a bit more sophisticated. One way to go is to search the class for any properties and then use getattr to get the values of those properties (so that the getter code actually runs), and also to introspect the instance variables to pick up any attributes assigned directly to the instance, discarding the ones starting with _:
@property
def options(self):
    results = {}
    # search all class attributes for properties, including superclasses:
    for name in dir(self.__class__):
        # skip this property itself, otherwise the getattr below would recurse forever:
        if name == "options":
            continue
        # obtain the object that is associated with this name in the class
        attr = getattr(self.__class__, name)
        if isinstance(attr, property):
            # ^ if you want to also retrieve other "property-like" attributes,
            # it is better to check whether it has the `__get__` method and is not callable:
            # "if hasattr(attr, '__get__') and not callable(attr):"
            # retrieve the attribute - ensuring the getter code is run:
            value = getattr(self, name)
            results[name] = value
    # check for the attributes assigned directly to the instance:
    for name, value in self.__dict__.items():
        # ^ here, vars(self) could have been used instead of self.__dict__
        if not name.startswith("_"):
            results[name] = value
    return results
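With that version of options replacing the original one in Obj (everything else from the question unchanged), the example from the question now behaves as hoped:

o = Obj()
o.uno = 10        # the setter stores 100 in o._uno
print(o.options)  # {'uno': 100} - the property name, not the private attribute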
About switch...case
On a side note to your question, regarding the "switch...case" construction: please disregard all content you read saying "in Python one should use dictionaries instead of switch/case". This is incorrect.
The correct construct to replace "switch...case" in Python is "if...elif...else". You can have all the expressiveness of a C-like "switch" with a plain if/elif tree in Python, and actually go much beyond it, since the test expression in if...elif can be arbitrary, not just a match against a single value.
option = get_some_user_option()

if option == "A":
    ...
elif option == "B":
    ...
elif option in ("C", "D", "E"):
    # common code for C, D, E
    ...
    if option == "E":
        # specialized code for "E"
        ...
else:
    # option does not exist.
    ...
While it is possible to use a dictionary as a call table, with functions as the dictionary values to perform the actions, this construct is obviously not a "drop-in" replacement for a plain switch/case: for one thing, the "case" functions can't be written inline in the dictionary unless they can be expressed as lambdas, and, more importantly, they won't have direct access to the variables of the function calling them.
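For comparison, a minimal sketch of that dictionary-as-call-table idiom (handler names are just illustrative, and get_some_user_option is the same placeholder used above): each branch has to live in its own function, and anything from the calling scope must be passed in explicitly.

def handle_a(context):
    ...  # code for option "A"

def handle_b(context):
    ...  # code for option "B"

handlers = {"A": handle_a, "B": handle_b}

option = get_some_user_option()
context = {}  # whatever local state the handlers need has to be passed in explicitly
handlers.get(option, lambda ctx: print("option does not exist"))(context)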
Having a background in Java, which is very verbose and strict, I find the ability to mutate Python objects so as to give them fields other than those passed to the constructor really "ugly".
Trying to accustom myself to a Pythonic way of thinking, I'm wondering how I should allow my objects to be constructed.
My instinct is to pass the fields at construction time, such as:
def __init__(self, foo, bar, baz=None):
    self.foo = foo
    self.bar = bar
    self.baz = baz
But that can become overly verbose and confusing with many fields to pass. To overcome this I assume the best method is to pass one dictionary to the constructor, from which the fields are extracted:
def __init__(self, field_map):
    self.foo = field_map["foo"]
    self.bar = field_map["bar"]
    self.baz = field_map["baz"] if "baz" in field_map else None
The other mechanism I can think of is to have the fields added elsewhere, such as:
class Blah(object):
    def __init__(self):
        pass

...

blah = Blah()
blah.foo = var1
But that feels way too loose for me.
(I suppose the issue in my head is how I deal with interfaces in Python...)
So, to reiterate the question: How I should construct my objects in Python? Is there an accepted convention?
The first form you describe is very common. Some use the shorter:
class Foo:
    def __init__(self, foo, bar):
        self.foo, self.bar = foo, bar
Your second approach isn't common, but a similar version is this:
class Thing:
    def __init__(self, **kwargs):
        self.something = kwargs['something']
        # ...
which allows you to create objects like
t = Thing(something=1)
This can be further modified to
class Thing:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
allowing
t = Thing(a=1, b=2, c=3)
print(t.a, t.b, t.c)  # prints 1 2 3
As Debilski points out in the comments, the last method is a bit unsafe; you can add a list of accepted parameters like this:
class Thing:
    keywords = 'foo', 'bar', 'snafu', 'fnord'

    def __init__(self, **kwargs):
        for kw in self.keywords:
            setattr(self, kw, kwargs[kw])
There are many variations, there is no common standard that I am aware of.
I’ve not seen many of your field_maps in real life. I think that would only make sense if you were to use the field_map at some other place in your code as well.
Concerning your third example: Even though you don’t need to assign to them (other than None), it is common practice to explicitly declare attributes in the __init__ method, so you’ll easily see what properties your object has.
So the following is better than simply having an empty __init__ method (you’ll also get a higher pylint score for that):
class Blah(object):
    def __init__(self):
        self.foo = None
        self.bar = None

blah = Blah()
blah.foo = var1
The problem with this approach is that your object might be in an ill-defined state after initialisation, because you have not yet defined all of your object's properties. This depends on your object's logic (logic in code and in meaning) and how your object works. If that is the case, however, I'd advise you not to do it this way. If your object relies on foo and bar being meaningfully defined, you should really set them inside your __init__ method.
If, however, the properties foo and bar are not mandatory, you’re free to define them afterwards.
If readability of the argument lists is an issue for you: use keyword arguments.
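For instance, using the first __init__ signature from the question, a call site with keyword arguments stays readable even as the number of fields grows (the values here are just illustrative):

blah = Blah(foo=1, bar="two", baz=None)  # explicit names keep a long argument list readable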