Suppose I have code like:
x = 0
y = 1
z = 2
my_list = [x, y, z]
for item in my_list:
    print("handling object ", name(item))  # <--- what would go instead of `name`?
How can I get the name of each object in Python? That is to say: what could I write instead of name in this code, so that the loop will show handling object x and then handling object y and handling object z?
In my actual code, I have a dict of functions that I will call later after looking them up with user input:
def fun1():
    pass
def fun2():
    pass
def fun3():
    pass

fun_dict = {'fun1': fun1,
            'fun2': fun2,
            'fun3': fun3}

# suppose that we get the name 'fun3' from the user
fun_dict['fun3']()
How can I create fun_dict automatically, without writing the names of the functions twice? I would like to be able to write something like
fun_list = [fun1, fun2, fun3]  # and I'll add more as the need arises
fun_dict = {}
for t in fun_list:
    fun_dict[name(t)] = t
to avoid duplicating the names.
Objects do not necessarily have names in Python, so you can't get the name.
When you create a variable, like the x, y, z above, those names just act as "pointers" or "references" to the objects. The object itself does not know what name(s) you are using for it, and you cannot easily (if at all) get the names of all references to that object.
However, it's not unusual for objects to have a __name__ attribute. Functions do have a __name__ (unless they are lambdas), so we can build fun_dict by doing e.g.
fun_dict = {t.__name__: t for t in fun_list}
That's not really possible, as there could be multiple variables that have the same value, or a value might have no variable, or a value might have the same value as a variable only by chance.
If you really want to do that, you can use
def variable_for_value(value):
    for n, v in globals().items():
        if v == value:
            return n
    return None
However, it would be better if you would iterate over names in the first place:
my_list = ["x", "y", "z"] # x, y, z have been previously defined
for name in my_list:
print "handling variable ", name
bla = globals()[name]
# do something to bla
This one-liner works, for all types of objects, as long as they are in globals() dict, which they should be:
def name_of_global_obj(xx):
    return [objname for objname, oid in globals().items()
            if id(oid) == id(xx)][0]
or, equivalently:
def name_of_global_obj(xx):
    for objname, oid in globals().items():
        if oid is xx:
            return objname
As others have mentioned, this is a really tricky question. Solutions to this are not "one size fits all", not even remotely. The difficulty (or ease) is really going to depend on your situation.
I have come to this problem on several occasions, but most recently while creating a debugging function. I wanted the function to take some unknown objects as arguments and print their declared names and contents. Getting the contents is easy of course, but the declared name is another story.
What follows is some of what I have come up with.
Return function name
Determining the name of a function is really easy as it has the __name__ attribute containing the function's declared name.
name_of_function = lambda x: x.__name__

def name_of_function(arg):
    try:
        return arg.__name__
    except AttributeError:
        pass
Just as an example, if you create the function def test_function(): pass, then copy_function = test_function, then name_of_function(copy_function), it will return test_function.
Return first matching object name
Check whether the object has a __name__ attribute and return it if so (declared functions only). Note that you may remove this test as the name will still be in globals().
Compare the value of arg with the values of items in globals() and return the name of the first match. Note that I am filtering out names starting with '_'.
The result will consist of the name of the first matching object otherwise None.
def name_of_object(arg):
    # check __name__ attribute (functions)
    try:
        return arg.__name__
    except AttributeError:
        pass

    for name, value in globals().items():
        if value is arg and not name.startswith('_'):
            return name
Return all matching object names
Compare the value of arg with the values of items in globals() and store names in a list. Note that I am filtering out names starting with '_'.
The result will consist of a list (for multiple matches), a string (for a single match), otherwise None. Of course you should adjust this behavior as needed.
def names_of_object(arg):
    results = [n for n, v in globals().items() if v is arg and not n.startswith('_')]
    return results[0] if len(results) == 1 else results if results else None
If you are looking to get the names of functions or lambdas or other function-like objects that are defined in the interpreter, you can use dill.source.getname from dill. It pretty much looks for the __name__ attribute, but in certain cases it knows other magic for how to find the name... or a name for the object. I don't want to get into an argument about finding the one true name for a Python object, whatever that means.
>>> from dill.source import getname
>>>
>>> def add(x, y):
...     return x + y
...
>>> squared = lambda x: x**2
>>>
>>> getname(add)
'add'
>>> getname(squared)
'squared'
>>>
>>> class Foo(object):
...     def bar(self, x):
...         return x*x + x
...
>>> f = Foo()
>>>
>>> getname(f.bar)
'bar'
>>>
>>> woohoo = squared
>>> plus = add
>>> getname(woohoo)
'squared'
>>> getname(plus)
'add'
Use a reverse dict.
fun_dict = {'fun1': fun1,
            'fun2': fun2,
            'fun3': fun3}
r_dict = dict(zip(fun_dict.values(), fun_dict.keys()))
The reverse dict will map each function reference to the exact name you gave it in fun_dict, which may or may not be the name you used when you defined the function. And, this technique generalizes to other objects, including integers.
For extra fun and insanity, you can store the forward and reverse values in the same dict. I wouldn't do that if you were mapping strings to strings, but if you are doing something like function references and strings, it's not too crazy.
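A minimal sketch of that combined mapping, assuming the fun1/fun2/fun3 functions from the question (functions are hashable, so they can be used as dict keys):

fun_dict = {'fun1': fun1, 'fun2': fun2, 'fun3': fun3}

both_ways = dict(fun_dict)                                    # name -> function
both_ways.update({f: name for name, f in fun_dict.items()})   # function -> name

both_ways['fun3']()      # look up by name and call
print(both_ways[fun3])   # prints 'fun3'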
Note that while, as noted, objects in general do not and cannot know what variables are bound to them, functions defined with def do have names in the __name__ attribute (the name used in def). Also if the functions are defined in the same module (as in your example) then globals() will contain a superset of the dictionary you want.
def fun1():
    pass
def fun2():
    pass
def fun3():
    pass

fun_dict = {}
for f in [fun1, fun2, fun3]:
    fun_dict[f.__name__] = f
Here's another way to think about it. Suppose there were a name() function that returned the name of its argument. Given the following code:
def f(a):
    return a

b = "x"
c = b
d = f(c)
e = [f(b), f(c), f(d)]
What should name(e[2]) return, and why?
And the reason I want to have the name of the function is because I want to create fun_dict without writing the names of the functions twice, since that seems like a good way to create bugs.
For this purpose you have the wonderful getattr function, which allows you to get an object by a known name. So you could do, for example:
funcs.py:
def func1(): pass
def func2(): pass
main.py:
import funcs
option = command_line_option()
getattr(funcs, option)()
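If the user-supplied name might not exist in funcs, one hedged variation is to use getattr's default argument (command_line_option() is the same placeholder as above):

import funcs

option = command_line_option()        # placeholder for however you read the input
func = getattr(funcs, option, None)   # None if funcs has no such attribute
if func is None:
    raise SystemExit("unknown command: %s" % option)
func()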
I know this is a late answer.
To get a function's name, you can use func.__name__.
To get the name of any Python object that has no name or __name__ attribute, you can iterate over the members of its module.
Example:
# package.module1.py
obj = MyClass()

# package.module2.py
import importlib

def get_obj_name(obj):
    mod = obj.__module__    # needed to find the module the object belongs to
    module = importlib.import_module(mod)
    for name, o in module.__dict__.items():
        if o == obj:
            return name
Performance note: don't use it in large modules.
Variable names can be found in the globals() and locals() dicts. But they won't give you what you're looking for above. "bla" will contain the value of each item of my_list, not the variable.
Generally, when you want to do something like this, you create a class to hold all of these functions and name them with some clear prefix such as cmd_. You then take the string from the command and try to get the attribute from the class with cmd_ prefixed to it. Now you only need to add a new method to the class and it's available to your callers, and you can use the docstrings for automatically creating the help text.
As described in other answers, you may be able to take the same approach with globals() and regular functions in your module to match what you asked for more closely (a sketch of that variant follows the class example below).
Something like this:
class Tasks:
    def cmd_doit(self):
        pass  # do it here

func_name = parse_commandline()
try:
    func = getattr(Tasks(), 'cmd_' + func_name)
except AttributeError:
    raise SystemExit("bad command")  # bad command: exit or whatever
func()
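And here is a hedged sketch of the globals()-based variant mentioned above, assuming the command functions live in the current module (parse_commandline() is the same placeholder as before):

def cmd_doit():
    pass  # do it here

func_name = parse_commandline()            # placeholder for the user input
func = globals().get('cmd_' + func_name)   # None if no such function exists
if func is None:
    raise SystemExit("bad command: %s" % func_name)
func()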
I ran into this page while wondering the same question.
As others have noted, it's simple enough to just grab the __name__ attribute from a function in order to determine the name of the function. It's marginally trickier with objects that don't have a sane way to determine __name__, i.e. base/primitive objects like basestring instances, ints, longs, etc.
Long story short, you could probably use the inspect module to make an educated guess about which one it is, but you would probably have to know what frame you're working in / traverse down the stack to find the right one. But I'd hate to imagine how much fun this would be when trying to deal with eval/exec'ed code.
% python2 whats_my_name_again.py
needle => ''b''
['a', 'b']
[]
needle => '<function foo at 0x289d08ec>'
['c']
['foo']
needle => '<function bar at 0x289d0bfc>'
['f', 'bar']
[]
needle => '<__main__.a_class instance at 0x289d3aac>'
['e', 'd']
[]
needle => '<function bar at 0x289d0bfc>'
['f', 'bar']
[]
%
whats_my_name_again.py:
#!/usr/bin/env python
import inspect

class a_class:
    def __init__(self):
        pass

def foo():
    def bar():
        pass

    a = 'b'
    b = 'b'
    c = foo
    d = a_class()
    e = d
    f = bar

    #print('globals', inspect.stack()[0][0].f_globals)
    #print('locals', inspect.stack()[0][0].f_locals)
    assert(inspect.stack()[0][0].f_globals == globals())
    assert(inspect.stack()[0][0].f_locals == locals())

    in_a_haystack = lambda: value == needle and key != 'needle'
    for needle in (a, foo, bar, d, f, ):
        print("needle => '%r'" % (needle, ))
        print([key for key, value in locals().iteritems() if in_a_haystack()])
        print([key for key, value in globals().iteritems() if in_a_haystack()])

foo()
You can define a class and add the __unicode__ special method to it, like this:
class example:
    def __init__(self, name):
        self.name = name

    def __unicode__(self):
        return self.name
Of course you have to add the extra attribute self.name, which holds the name of the object.
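A quick usage sketch (note that __unicode__ is only consulted by unicode() on Python 2; on Python 3 you would name the method __str__ instead):

obj = example("my_object")
print(obj.name)   # prints: my_object
# Python 2: unicode(obj) -> u'my_object'
# Python 3 (with the method renamed to __str__): str(obj) -> 'my_object'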
Here is my answer, I am also using globals().items()
def get_name_of_obj(obj, except_word=""):
    for name, item in globals().items():
        if item == obj and name != except_word:
            return name
I added except_word because I want to filter out the name used in the for loop.
If you don't add it, the loop variable may confuse this function; sometimes a name like "each_item" in the case below will show up in the function's result, depending on what you have done in your loop.
eg.
for each_item in [objA, objB, objC]:
    get_name_of_obj(each_item, "each_item")
eg.
>>> objA = [1, 2, 3]
>>> objB = ('a', {'b':'thi is B'}, 'c')
>>> for each_item in [objA, objB]:
...     get_name_of_obj(each_item)
...
'objA'
'objB'
>>>
>>> objC = [{'a1':'a2'}]
>>>
>>> for item in [objA, objB, objC]:
...     get_name_of_obj(item)
...
'objA'
'item' <<<<<<<<<< --------- this is no good
'item'
>>> for item in [objA, objB]:
...     get_name_of_obj(item)
...
'objA'
'item' <<<<<<<<--------this is no good
>>>
>>> for item in [objA, objB, objC]:
...     get_name_of_obj(item, "item")
...
'objA'
'objB' <<<<<<<<<<--------- now it's ok
'objC'
>>>
Hope this can help.
Based on what it looks like you're trying to do, you could use this approach.
In your case, your functions would all live in the module foo. Then you could:
import foo
func_name = parse_commandline()
method_to_call = getattr(foo, func_name)
result = method_to_call()
Or more succinctly:
import foo
result = getattr(foo, parse_commandline())()
Python has names which are mapped to objects in a hashmap called a namespace. At any instant in time, a name always refers to exactly one object, but a single object can be referred to by any arbitrary number of names. Given a name, it is very efficient for the hashmap to look up the single object which that name refers to. However given an object, which as mentioned can be referred to by multiple names, there is no efficient way to look up the names which refer to it. What you have to do is iterate through all the names in the namespace and check each one individually and see if it maps to your given object. This can easily be done with a list comprehension:
[k for k,v in locals().items() if v is myobj]
This will evaluate to a list of strings containing the names of all local "variables" which are currently mapped to the object myobj.
>>> a = 1
>>> this_is_also_a = a
>>> this_is_a = a
>>> b = "ligma"
>>> c = [2,3, 534]
>>> [k for k,v in locals().items() if v is a]
['a', 'this_is_also_a', 'this_is_a']
Of course locals() can be substituted with any dict that you want to search for names that point to a given object. Obviously this search can be slow for very large namespaces because they must be traversed in their entirety.
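If you need to do this lookup many times, one hedged option is to build a reverse index over the namespace once; keying by id() sidesteps unhashable values. Continuing from the session above:

namespace = dict(globals())   # or locals(), or any namespace dict you like

names_by_id = {}
for name, value in namespace.items():
    names_by_id.setdefault(id(value), []).append(name)

print(names_by_id.get(id(a), []))   # e.g. ['a', 'this_is_also_a', 'this_is_a']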
One way to get the name of the variable that stores an instance of a class is to use the locals() function; it returns a dictionary that maps each variable name (as a string) to its value.
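A minimal sketch of that idea, using a made-up MyClass just for illustration:

class MyClass:
    pass

instance = MyClass()

# locals() maps variable names (strings) to their values in the current scope
names = [name for name, value in locals().items() if value is instance]
print(names)   # ['instance']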
This code works well when I try to change a dictionary in a closure:
def a():
    x = {'a': 'b'}
    def b():
        x['a'] = 'd'
        print(x)
    return b
>>> b = a()
>>> b()
{'a':'d'}
The output meets my expectation. But why doesn't the code below work?
def m():
    x = 1
    def n():
        x += 1
        print(x)
    return n
>>> n = m()
>>> n()
UnboundLocalError: local variable 'x' referenced before assignment
I know that we can use a nonlocal x statement to solve this problem, but can anybody explain the reason more deeply? What is the difference between a dictionary and a number here?
Thanks!
Python has a great FAQ specifically on this.
In short, when you modify a dictionary, or any mutable object, you modify the same object, so you don't re-assign the variable. In the case of an integer, since it's immutable, doing += creates a new object and binds it to x. Because that assignment makes x local to the inner function while you're still trying to read its old value from the outer function, you get the error.
You can check if it's the same object using id().
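A short sketch of both points: nonlocal makes the integer version work, and id() shows that the dict is mutated in place while the int is replaced by a new object:

def m():
    x = 1
    def n():
        nonlocal x   # allow rebinding x in the enclosing scope
        x += 1
        print(x)
    return n

m()()   # prints 2

d = {'a': 'b'}
before = id(d)
d['a'] = 'd'
print(id(d) == before)   # True: same dict object, modified in place

i = 1
before = id(i)
i += 1
print(id(i) == before)   # False: i now refers to a different int object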
def change(a):
    a = 4
print('1:')
c=3
print('Value before changing',c)
change(c)
print('Value after changing',c)
print('2:')
d=6
print('Value before changing',d)
change(d)
print('Value after changing',d)
print('3:')
e=7
print('Value before changing',e)
change(e)
print('Value after changing',e)
I want to change n distinct global variables. E.g., I want to change the global variables c, d, and e using a function, passing each one as an argument. How can I do so?
Edit:
My original answer wouldn't have worked, so here's my new answer. First, you'll need a function to get the name of the variable. This can be done with the built-in inspect module like so:
import inspect
def retrieve_name(var):
    callers_local_vars = inspect.currentframe().f_back.f_locals.items()
    return [var_name for var_name, var_val in callers_local_vars if var_val is var]
Then, you'll need to rewrite your change function to
def change(a):
    globals()[a] = 4
And use it in conjunction with the retrieve_name function like so,
change(retrieve_name(x)[0])
Because if you just put the retrieve_name inside change it will always return a.
Below is my original answer:
Tell the function change that a is global. Eg:
def change(a):
    global a
    a = 4
The global keyword tells the function that there already exists a variable of this name, defined outside the current scope. It is in what Python calls the global scope (think the outermost scope of a Python file).
>>> def change(a, value=4):
...     global a
...     a = value
>>> x = 3
>>> change(x)
# x = 4
I've updated my previous answer so that it should work, but here is an alternative method.
def change(**kwargs):
    for name in kwargs:
        globals()[name] = 4
x = 3
change(x=x) # whatever follows the '=' sign is redundant in this case
Or, you could do
def change(**kwargs):
    globals().update(kwargs)
x = 3
change(x=4) # the global value of 'x' is now 4
How do you override the result of unpacking syntax *obj and **obj?
For example, can you somehow create an object thing which behaves like this:
>>> [*thing]
['a', 'b', 'c']
>>> [x for x in thing]
['d', 'e', 'f']
>>> {**thing}
{'hello world': 'I am a potato!!'}
Note: the iteration via __iter__ ("for x in thing") returns different elements from the *splat unpack.
I had a look at operator.mul and operator.pow, but those functions only concern usages with two operands, like a*b and a**b, and seem unrelated to splat operations.
* iterates over an object and uses its elements as arguments. ** iterates over an object's keys and uses __getitem__ (equivalent to bracket notation) to fetch key-value pairs. To customize *, simply make your object iterable, and to customize **, make your object a mapping:
import collections   # note: on modern Python this base class lives in collections.abc

class MyIterable(object):
    def __iter__(self):
        return iter([1, 2, 3])

class MyMapping(collections.Mapping):
    def __iter__(self):
        return iter('123')

    def __getitem__(self, item):
        return int(item)

    def __len__(self):
        return 3
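A quick usage sketch of the two classes above:

print([*MyIterable()])      # [1, 2, 3]
print(list(MyIterable()))   # [1, 2, 3] -- * unpacking is just ordinary iteration
print({**MyMapping()})      # {'1': 1, '2': 2, '3': 3}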
If you want * and ** to do something besides what's described above, you can't. I don't have a documentation reference for that statement (since it's easier to find documentation for "you can do this" than "you can't do this"), but I have a source quote. The bytecode interpreter loop in PyEval_EvalFrameEx calls ext_do_call to implement function calls with * or ** arguments. ext_do_call contains the following code:
if (!PyDict_Check(kwdict)) {
    PyObject *d;
    d = PyDict_New();
    if (d == NULL)
        goto ext_call_fail;
    if (PyDict_Update(d, kwdict) != 0) {
which, if the ** argument is not a dict, creates a dict and performs an ordinary update to initialize it from the keyword arguments (except that PyDict_Update won't accept a list of key-value pairs). Thus, you can't customize ** separately from implementing the mapping protocol.
Similarly, for * arguments, ext_do_call performs
if (!PyTuple_Check(stararg)) {
    PyObject *t = NULL;
    t = PySequence_Tuple(stararg);
which is equivalent to tuple(args). Thus, you can't customize * separately from ordinary iteration.
It'd be horribly confusing if f(*thing) and f(*iter(thing)) did different things. In any case, * and ** are part of the function call syntax, not separate operators, so customizing them (if possible) would be the callable's job, not the argument's. I suppose there could be use cases for allowing the callable to customize them, perhaps to pass dict subclasses like defaultdict through...
I did succeed in making an object that behaves how I described in my question, but I really had to cheat. So just posting this here for fun, really -
class Thing:
    def __init__(self):
        self.mode = 'abc'

    def __iter__(self):
        if self.mode == 'abc':
            yield 'a'
            yield 'b'
            yield 'c'
            self.mode = 'def'
        else:
            yield 'd'
            yield 'e'
            yield 'f'
            self.mode = 'abc'

    def __getitem__(self, item):
        return 'I am a potato!!'

    def keys(self):
        return ['hello world']
The iterator protocol is satisfied by a generator object returned from __iter__ (note that a Thing() instance itself is not an iterator, though it is iterable). The mapping protocol is satisfied by the presence of keys() and __getitem__. Yet, in case it wasn't already obvious, you can't call *thing twice in a row and have it unpack a,b,c twice in a row - so it's not really overriding splat like it pretends to be doing.
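A short demonstration of that (slightly dishonest) behaviour:

thing = Thing()
print([*thing])             # ['a', 'b', 'c']  (and mode flips to 'def')
print([x for x in thing])   # ['d', 'e', 'f']  (and mode flips back to 'abc')
print({**thing})            # {'hello world': 'I am a potato!!'}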
In Java, if I call List.toString(), it will automatically call the toString() method on each object inside the List. For example, if my list contains objects o1, o2, and o3, list.toString() would look something like this:
"[" + o1.toString() + ", " + o2.toString() + ", " + o3.toString() + "]"
Is there a way to get similar behavior in Python? I implemented a __str__() method in my class, but when I print out a list of objects, using:
print 'my list is %s'%(list)
it looks something like this:
[<__main__.cell instance at 0x2a955e95f0>, <__main__.cell instance at 0x2a955e9638>, <__main__.cell instance at 0x2a955e9680>]
How can I get Python to call my __str__() automatically for each element inside the list (or dict, for that matter)?
Calling str() on a Python list calls the __repr__ method on each element inside. For some items, __str__ and __repr__ are the same. If you want that behavior, do:
def __str__(self):
    ...

def __repr__(self):
    return self.__str__()
You can use a list comprehension to generate a new list with each item str()'d automatically:
print([str(item) for item in mylist])
There are two easy things you can do: use the map function or use a comprehension.
But that gets you a list of strings, not a string. So you also have to join the strings together.
s= ",".join( map( str, myList ) )
or
s= ",".join( [ str(element) for element in myList ] )
Then you can print this composite string object.
print 'my list is %s'%( s )
Depending on what you want to use that output for, perhaps __repr__ might be more appropriate:
import unittest

class A(object):
    def __init__(self, val):
        self.val = val

    def __repr__(self):
        return repr(self.val)

class Test(unittest.TestCase):
    def testMain(self):
        l = [A('a'), A('b')]
        self.assertEqual(repr(l), "['a', 'b']")

if __name__ == '__main__':
    unittest.main()
I agree with the previous answer about using list comprehensions to do this, but you could certainly hide that behind a function, if that's what floats your boat.
def is_list(value):
    if type(value) in (list, tuple):
        return True
    return False

def list_str(value):
    if not is_list(value):
        return str(value)
    return [list_str(v) for v in value]
Just for fun, I made list_str() recursively str() everything contained in the list.
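For example, with the helpers above:

print(list_str([1, [2, 3], (4, 5)]))
# ['1', ['2', '3'], ['4', '5']]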
Something like this?
a = [1, 2, 3]
[str(x) for x in a]
# ['1', '2', '3']
This should suffice.
When printing lists as well as other container classes, the contained elements will be printed using __repr__, because __repr__ is meant to be used for internal object representation.
If we call: help(object.__repr__) it will tell us:
Help on wrapper_descriptor:
__repr__(self, /)
Return repr(self).
And if we call help(repr) it will output:
Help on built-in function repr in module builtins:
repr(obj, /)
Return the canonical string representation of the object.
For many object types, including most builtins, eval(repr(obj)) == obj.
If __str__ is implemented for an object and __repr__ is not, repr(obj) will produce the default output, just like print(obj) does when neither is implemented.
So the only way is to implement __repr__ for your class. One possible way to do that is this:
class C:
    def __str__(self):
        return str(f"{self.__class__.__name__} class str ")

C.__repr__ = C.__str__

ci = C()
print(ci)        # C class str
print(str(ci))   # C class str
print(repr(ci))  # C class str
The output you're getting is just the object's module name, class name, and the memory address in hexadecimal, because the __repr__ function is not overridden.
__str__ is used for the string representation of an object when using print. But since you are printing a list of objects, rather than iterating over the list and calling str() on each item, it prints each object's representation.
To have the __str__ function invoked you'd need to do something like this:
'my list is %s' % [str(x) for x in myList]
If you override the __repr__ function you can use the print method like you were before:
class cell:
    def __init__(self, id):
        self.id = id

    def __str__(self):
        return str(self.id)  # Or whatever

    def __repr__(self):
        return str(self)  # function invoked when you try to print the whole list

myList = [cell(1), cell(2), cell(3)]
'my list is %s' % myList
Then you'll get "my list is [1, 2, 3]" as your output.
Why is it that when I create a class with a list, the list and its contents become global?
class A:
    my_list = []
    string = ""

    def add(self, data):
        self.string += "a"
        self.my_list.append(data)

    def print_list(self):
        print(self.string)
        print(self.my_list)

a = A()
b = A()
a.add("test")
a.print_list()
b.print_list()
Both a and b will print the list that was created by a.add
# results of a.print_list
xa
['test']
# results of b.print_list
x
['test']
So my question is: is this normal for Python 3, or a bug?
It doesn't seem right to me that only the list is modified globally.
This is the explanation:
Objects have individuality, and multiple names (in multiple scopes) can be bound to the same object. This is known as aliasing in other languages. This is usually not appreciated on a first glance at Python, and can be safely ignored when dealing with immutable basic types (numbers, strings, tuples). However, aliasing has a possibly surprising effect on the semantics of Python code involving mutable objects such as lists, dictionaries, and most other types. This is usually used to the benefit of the program, since aliases behave like pointers in some respects. For example, passing an object is cheap since only a pointer is passed by the implementation; and if a function modifies an object passed as an argument, the caller will see the change — this eliminates the need for two different argument passing mechanisms as in Pascal.
From the official Python 3 docs.
And this is the solution:
(...) use an instance variable instead (...)
Like this:
class Dog:
    def __init__(self, name):
        self.name = name
        self.tricks = []    # creates a new empty list for each dog

    def add_trick(self, trick):
        self.tricks.append(trick)
Result of above code:
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks
['roll over']
>>> e.tricks
['play dead']
In your example, you should move your my_list = [] declaration into the __init__ method...
class A:
    string = ""

    def __init__(self):
        self.my_list = []

    def add(self, data):
        self.string += "a"
        self.my_list.append(data)

    def print_list(self):
        print(self.string)
        print(self.my_list)

a = A()
b = A()
a.add("test")
a.print_list()
b.print_list()
Hope this helps.
Regards
Strings are immutable so
self.string += "a"
creates a new object and binds it to self.string
This is clearly mutating the list in place
self.my_list.append(data)
Perhaps more interesting is that
self.my_list += [data]
also mutates the list
The general rule is that augmented assignment (+=, via __iadd__ when it exists) behaves differently for mutable vs. immutable objects.
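A short sketch that makes the difference visible with id():

lst = [1, 2]
s = "ab"
lst_id, s_id = id(lst), id(s)

lst += [3]   # list.__iadd__ extends the existing list in place
s += "c"     # str has no __iadd__, so a new string object is created

print(id(lst) == lst_id)   # True  -- still the same list object
print(id(s) == s_id)       # False -- s is now bound to a different object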