Namedtuple Class Name - python-3.x

Just wondering: when we create a namedtuple, why do we always make the name we assign it to (on the left) the same as the name we pass to namedtuple (on the right)?
I tried an experiment where the two names differ, and then created instances using each name:
>>> from collections import namedtuple
>>> latlong = namedtuple('good', 'lat long')
>>> latlong(1, 2)
good(lat=1, long=2)
>>> good(1,2)
latlong(lat=1, long=2)
So when I create the tuple using latlong, it returns good(...), and when I use good, it returns latlong(...). Why do they return different names each time? Shouldn't they always return the original object name, which is good?
Thank You!
Erik

The name passed as the first parameter is the one that gets stored inside the named tuple class, as its __name__ attribute, and it is what is used when you call repr on such a tuple. (You, or, in this case, the interactive interpreter, which makes a repr call to produce what you see printed.)
The name you assign the namedtuple to - that is, the name on the left side of = - is just any name you choose to use in your code, just as you can name any other Python object.
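A quick interactive check of the example above shows where each name lives:

>>> from collections import namedtuple
>>> latlong = namedtuple('good', 'lat long')
>>> latlong.__name__
'good'
>>> latlong(1, 2)
good(lat=1, long=2)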
You can even call a single namedtuple class a lot of names, by simply doing:
N = Name = NameAndAddress = namedtuple("NameAndAddress", "name address")
You can use the same object through any of the assigned names. Or you can store a lot of namedtuples in a list and give no explicit name to any of them - just like any other object, which variable name holds a reference to it makes no difference.
That is a bit unfortunate, because for readability we usually want the namedtuple's "internal" name to be the same name we use to refer to it. Python has no way of cleanly creating such an assignment - unless it is made inside a class body. In other words, there is no mechanism in the language that lets the expression on the right side of = know which name it is being assigned to (on the left side of =) - so namedtuple's signature requires you to spell out its name explicitly as a string. But despite that, there is no mechanism or check to verify that the assigned name and the internal name are the same.
Just as a side comment: if you are assigning names inside a class body, there are mechanisms in Python that allow you to "tell" the assigned object its name (which will become a class attribute). Up to Python 3.5 you have to do that in the class's metaclass - or in a class decorator. From Python 3.6 on, the assigned object may itself be a descriptor whose class implements a special method named __set_name__. If so, that method is called when the owning class is created, and it receives the class object and the attribute name - the code in the method can then record the name on the descriptor.
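A minimal sketch of that __set_name__ hook (the class names here are made up for illustration):

class Named:
    def __set_name__(self, owner, name):
        # Called automatically when the owning class is created.
        self.name = name
    def __get__(self, instance, owner):
        return 'I am attribute {!r} of {}'.format(self.name, owner.__name__)

class Config:
    host = Named()
    port = Named()

print(Config.host)   # I am attribute 'host' of Config
print(Config.port)   # I am attribute 'port' of Config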

Related

NamedTuple - checking types of fields at runtime

Is there a neat solution to raise an error if a value passed to a NamedTuple field does not match the declared type?
In this example, I intentionally passed page_count a str instead of an int, and the script will happily pass the erroneous value forward.
(I understand that a linter will draw your attention to the error, but I encountered this in a case where the NamedTuple fields were filled in by a function reading values from a config file.)
I could check the type of each value with a condition, but that doesn't look very clean. Any ideas? Thanks.
from typing import NamedTuple

class ParserParams(NamedTuple):
    api_url: str
    page_count: int
    timeout: float

parser_params = ParserParams(
    api_url='some_url',
    page_count='3',
    timeout=10.0,
)
By design, Python is a dynamically typed language, which means any value can be assigned to any variable. Type annotations are only hints - errors might be highlighted in your IDE, but nothing is enforced at runtime.
This means that if you need type checking you have to implement it yourself. On the upside, this can be automated, i.e. implemented once instead of separately for every field. However, NamedTuple does not provide such checking out of the box.
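For simple annotations (plain classes such as str, int, or float - not generics), one possible sketch is a small helper that compares each field against the class's __annotations__; the name check_types is made up here:

from typing import NamedTuple

class ParserParams(NamedTuple):
    api_url: str
    page_count: int
    timeout: float

def check_types(nt) -> None:
    # Compare each field's value against its annotated type.
    # Only works when the annotations are plain classes, not generics.
    for name, expected in type(nt).__annotations__.items():
        value = getattr(nt, name)
        if not isinstance(value, expected):
            raise TypeError('{}={!r} is {}, expected {}'.format(
                name, value, type(value).__name__, expected.__name__))

params = ParserParams(api_url='some_url', page_count='3', timeout=10.0)
check_types(params)  # raises TypeError: page_count='3' is str, expected int

If a dependency is acceptable, libraries such as pydantic perform this kind of validation for you.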

Is there any way to define variables in classes without any default value?

I want the following behaviour:
class RealEstate:
    rooms
However, this results in an unresolved reference: NameError: name 'rooms' is not defined.
I am aware that I can assign None to the variable:
class RealEstate:
    rooms = None
Or define the type:
class RealEstate:
    rooms: float
Both will work, but that is not what I want. I want it as simple as possible, with as little typing as possible. Is there any way to make the first example work? Maybe some metaclass magic, a clever decorator, a special base class, or a library that can help?
No, you can't "define" a variable by writing its name alone. A bare name is evaluated as an expression, and if it isn't already bound it raises a NameError.
You can either
"define" a variable with an assignment, name = value (this binds the name to a value at runtime, so you need to supply a value), or
"define" a variable with a type annotation, name: type (this associates the name with a type, for which you need to supply the type, but it does not create the name or any value at runtime).

How to modify immutable objects passed as **arguments to functions in Python 3, the elegant way?

I am not sure what the problem is here, so I don't really know what to call this question. Please suggest a better title if you know one.
The code below is an extremely simplified version of the original, but it reproduces the problem nicely. After the call to test(), foo should be 'sieben'.
I think I am missing something about variable scope in Python. This might be a good problem to learn more about it, but I don't know which Python topic I should focus on to find a solution myself.
#!/usr/bin/env python3

def test(handlerFunction, **handlerArgs):
    handlerFunction(**handlerArgs)

def myhandler(dat):
    print('dat={}'.format(dat))
    dat = 'sieben'
    print('dat={}'.format(dat))

foo = 'foo'
test(myhandler, dat=foo)
print('foo={}'.format(foo))
Of course I could make foo a global variable, but that is not the goal. The goal is to carry this variable into sub-functions at different levels and bring the result back. In the original code I use more complex data structures with **handlerArgs.
A solution could be to use a list as a mutable object holding the immutable one. But is this really elegant or pythonic?
#!/usr/bin/env python3

def test(handlerFunction, **handlerArgs):
    handlerFunction(**handlerArgs)

def myhandler(dat):
    print('dat={}'.format(dat))
    # MODIFIED LINE
    dat[0] = 'sieben'
    print('dat={}'.format(dat))

# MODIFIED LINE
foo = ['foo']
test(myhandler, dat=foo)
print('foo={}'.format(foo))
The ** syntax has nothing to do with this. dat is local to myhandler, and assigning it doesn't change the global variable with the same name. If you want to change the module variable from inside the function, declare the variable as global at the beginning of the function body:
def myhandler():  # you don't need to pass dat as an argument
    global dat
    print('dat={}'.format(dat))
    dat = 'sieben'
    print('dat={}'.format(dat))
Here's a relevant portion from the docs:
If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. This can lead to errors when a name is used within a block before it is bound. This rule is subtle. Python lacks declarations and allows name binding operations to occur anywhere within a code block. The local variables of a code block can be determined by scanning the entire text of the block for name binding operations.
If the global statement occurs within a block, all uses of the name specified in the statement refer to the binding of that name in the top-level namespace. Names are resolved in the top-level namespace by searching the global namespace, i.e. the namespace of the module containing the code block, and the builtins namespace, the namespace of the module builtins. The global namespace is searched first. If the name is not found there, the builtins namespace is searched. The global statement must precede all uses of the name.
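A tiny illustration of that rule - the assignment at the bottom of the function makes x local to the whole function body, so the earlier read fails:

x = 'global'

def broken():
    print(x)      # UnboundLocalError: x is local here because of the assignment below
    x = 'local'

broken()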
After your edit the question reads as: "how do I mutate an immutable object?"
Well, I think you've guessed it: you don't. Using a mutable object in this manner seems reasonable to me.
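An alternative worth mentioning (my own suggestion, not part of the original answer): instead of mutating anything, have the handler return the new value and rebind the name at the call site:

def test(handlerFunction, **handlerArgs):
    return handlerFunction(**handlerArgs)

def myhandler(dat):
    print('dat={}'.format(dat))
    return 'sieben'

foo = 'foo'
foo = test(myhandler, dat=foo)
print('foo={}'.format(foo))   # foo=sieben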

Iterating an object's attributes using dir and getattr rigorously

I am trying to programmatically access all fields of some Python3 object using a combination of dir and getattr. Pseudocode below:
x = some_object()
for i in dir(x):
    print(str(getattr(x, i)))
However, the Python docs (https://docs.python.org/2/library/functions.html#dir) are very vague about dir:
Note Because dir() is supplied primarily as a convenience for use at an interactive prompt, it tries to supply an interesting set of names more than it tries to supply a rigorously or consistently defined set of names, and its detailed behavior may change across releases.
My questions:
1) Is there a way to achieve the above more rigorously than using dir?
2) What does "interesting set of names" mean and how is it computed?
Kind of?
With custom __getattr__ and __getattribute__ implementations, an object can dynamically respond to requests for any attribute. You could have an object that has every attribute, or an object that randomly has or doesn't have the foo attribute with 50% probability every time you look at it. If you know how an object's __getattr__ and __getattribute__ work, and you know that they respond to a finite list of attribute names, then you can write your own version of dir that lists everything, but even handling the basic built-in types requires an uncomfortable number of cases. You can see the dir implementation in Objects/object.c.
The documentation gives an explanation of which attributes are considered interesting:
If the object is a module object, the list contains the names of the module’s attributes.
If the object is a type or class object, the list contains the names of its attributes, and recursively of the attributes of its bases.
Otherwise, the list contains the object’s attributes’ names, the names of its class’s attributes, and recursively of the attributes of its class’s base classes.
By "attributes", this documentation is mostly referring to the keys of its __dict__. There's also some handling for __members__ and __methods__; those are deprecated, and I don't remember what they do.

python: manipulating __dict__ of the class

(All in ActivePython 3.1.2)
I tried to change the class (rather than instance) attributes. The __dict__ of the metaclass seemed like the perfect solution. But when I tried to modify, I got:
TypeError: 'dict_proxy' object does not support item assignment
Why, and what can I do about it?
EDIT
I'm adding attributes inside the class definition.
setattr doesn't work because the class is not yet built, and hence I can't refer to it yet (or at least I don't know how).
The traditional assignment doesn't work because I'm adding a large number of attributes, whose names are determined by a certain rule (so I can't just type them out).
In other words, suppose I want class A to have attributes A.a001 through A.a999; and all of them have to be defined before it's fully built (since otherwise SQLAlchemy won't instrument it properly).
Note also that I made a typo in the original title: it's __dict__ of a regular class, not a metaclass, that I wanted to modify.
The creation of a large number of attributes following some rule smells like something is seriously wrong. I'd go back and see if there isn't a better way of doing that.
Having said that, here is "Evil Code" (but it'll work, I think):
class A:
    locals()['alpha'] = 1

print(A.alpha)
This works because while the class is being defined there is a dictionary that tracks the local variables you are defining, and these local variables eventually become the class attributes. Be careful with locals(), as it won't necessarily act "correctly" - you aren't really supposed to modify locals(), but it does seem to work when I tried it.
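Extending that trick to the rule-based attributes from the question (a001 through a999) might look like the sketch below - fragile and CPython-specific, not something to rely on:

class A:
    _ns = locals()
    for _i in range(1, 1000):
        _ns['a{:03d}'.format(_i)] = None   # creates A.a001 ... A.a999
    del _ns, _i                            # remove the helper names again

print(A.a001, A.a500, A.a999)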
Instead of using the declarative syntax, build the table separately and then use mapper on it - see http://www.sqlalchemy.org/docs/05/ormtutorial.html# I think there is just no good way to add computed attributes to a class while defining it.
Alternatively, I don't know whether this will work but:
class A(object):
    pass

A.all_my_attributes = values

class B(declarative_base, A):
    pass
might possibly work.
I'm not too familiar with how Python 3 treats __dict__, but you might be able to circumvent this problem by simply inheriting from dict, like so:
class A(dict):
    def __init__(self, dict_of_args):
        self['key'] = 'myvalue'
        self.update(dict_of_args)
        # whatever else you need to do goes here...

An instance of A can then be used like so:

d = {1: 2, 3: 4}
obj = A(d)
print(obj['key'], obj[3])  # this will print myvalue and 4
Hope this helps.
