I am wondering whether something counts as a name if it is written with object.attribute syntax. The motivation comes from trying to understand this code from Learning Python:
import builtins

def makeopen(id):
    original = builtins.open
    def custom(*pargs, **kargs):
        print('Custom open call %r' % id, pargs, kargs)
        return original(*pargs, **kargs)
    builtins.open = custom
I wanted to map each name/variable to the scope it exists in, but I am unsure what to do with builtins.open. Is builtins.open a name? In the book the author states that object.attribute lookup follows completely different rules from plain name lookups, which suggests to me that builtins.open is not a name at all: the execution model docs say that scopes define where names are visible, and since object.attribute syntax works in any scope, it doesn't fit that classification.
The conceptual problem I then have is what builtins.open actually is. It is still a reference to an object, and it can be rebound to any other object. In that sense it acts like a name, even though it doesn't follow scope rules.
Thank you.
builtins.open is just another way to access the global open function:
import builtins
print(open)
# <built-in function open>
print(builtins.open)
# <built-in function open>
print(open == builtins.open)
# True
From the docs:
This module provides direct access to all ‘built-in’ identifiers of
Python; for example, builtins.open is the full name for the built-in
function open()
Regarding the second part of your question, I'm not sure what you mean. (Almost) every "name" in Python can be reassigned to something completely different.
>>> list
<class 'list'>
>>> list = 1
>>> list
1
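That assignment only shadows the built-in with a module-level global; once the shadowing name is deleted, lookup falls back to the builtins module again (a small sketch in a plain CPython session):
>>> del list
>>> list
<class 'list'>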
However, be careful with anything under builtins: its attributes are shared by every module, so some nasty, weird behavior is bound to happen if someone (or something) reassigns them at runtime. Note that the assignment itself is not blocked; in the session below it succeeds, and it is the PyCharm console's own helper code that immediately breaks, because list no longer refers to the built-in type:
>>> import builtins
>>> builtins.list = 1
Traceback (most recent call last):
File "C:\Program Files\PyCharm 2018.2.1\helpers\pydev\_pydev_comm\server.py", line 34, in handle
self.processor.process(iprot, oprot)
File "C:\Program Files\PyCharm 2018.2.1\helpers\third_party\thriftpy\_shaded_thriftpy\thrift.py", line 266, in process
self.handle_exception(e, result)
File "C:\Program Files\PyCharm 2018.2.1\helpers\third_party\thriftpy\_shaded_thriftpy\thrift.py", line 254, in handle_exception
raise e
File "C:\Program Files\PyCharm 2018.2.1\helpers\third_party\thriftpy\_shaded_thriftpy\thrift.py", line 263, in process
result.success = call()
File "C:\Program Files\PyCharm 2018.2.1\helpers\third_party\thriftpy\_shaded_thriftpy\thrift.py", line 228, in call
return f(*(args.__dict__[k] for k in api_args))
File "C:\Program Files\PyCharm 2018.2.1\helpers\pydev\_pydev_bundle\pydev_console_utils.py", line 217, in getFrame
return pydevd_thrift.frame_vars_to_struct(self.get_namespace(), hidden_ns)
File "C:\Program Files\PyCharm 2018.2.1\helpers\pydev\_pydevd_bundle\pydevd_thrift.py", line 239, in frame_vars_to_struct
keys = dict_keys(frame_f_locals)
File "C:\Program Files\PyCharm 2018.2.1\helpers\pydev\_pydevd_bundle\pydevd_constants.py", line 173, in dict_keys
return list(d.keys())
TypeError: 'int' object is not callable
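To tie this back to the scope question: only builtins itself is a name, resolved through the normal scope rules; .open is then an attribute lookup on that module object. A minimal sketch:
import builtins

# `builtins` is an ordinary name (here a module-level global created by the import);
# `.open` is an attribute lookup on that module object, roughly equivalent to:
same_open = builtins.__dict__['open']
print(same_open is builtins.open)
# True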
I'm getting asymmetric behavior when using Path.relative_to versus os.path.relpath; see the examples below. The pathlib section "Correspondence to tools in the os module" led me to believe they behave the same.
I'm working with two paths here
C:\Sync\Rmaster_head_\bin
C:\Sync\installed
I'm using Python 3.9.15.
os.path.relpath
>>> import os.path
>>> import pathlib
>>> start = pathlib.Path(r"../../installed")
>>> rel_path = os.path.relpath(pathlib.Path(r"C:/Sync/Rmaster_head_/bin"), start=start)
>>> rel_path
'..\\..\\..\\..\\Sync\\Rmaster_head_\\bin'
>>> start / pathlib.Path(rel_path)
WindowsPath('../../installed/../../../../Sync/Rmaster_head_/bin')
>>> (start / pathlib.Path(rel_path)).resolve()
WindowsPath('C:/Sync/Rmaster_head_/bin')
pathlib.Path.relative_to in both directions
>>> pathlib.Path(r"C:/Sync/Rmaster_head_/bin").relative_to(pathlib.Path(r"../../installed"))
Traceback (most recent call last):
File "C:\Sync\installed\R2023.1.175_install\sys\python3\x86_64-unknown-winnt_i19v19\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "C:\Sync\installed\R2023.1.175_install\sys\python3\x86_64-unknown-winnt_i19v19\lib\pathlib.py", line 939, in relative_to
raise ValueError("{!r} is not in the subpath of {!r}"
ValueError: 'C:\\Sync\\Rmaster_head_\\bin' is not in the subpath of '..\\..\\installed' OR one path is relative and the other is absolute.
>>> pathlib.Path(r"../../installed").relative_to(pathlib.Path(r"C:/Sync/Rmaster_head_/bin"))
Traceback (most recent call last):
File "C:\Sync\installed\R2023.1.175_install\sys\python3\x86_64-unknown-winnt_i19v19\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "C:\Sync\installed\R2023.1.175_install\sys\python3\x86_64-unknown-winnt_i19v19\lib\pathlib.py", line 939, in relative_to
raise ValueError("{!r} is not in the subpath of {!r}"
ValueError: '..\\..\\installed' is not in the subpath of 'C:\\Sync\\Rmaster_head_\\bin' OR one path is relative and the other is absolute.
I've noticed that the exception says that one of the paths may be relative, but it doesn't work with absolute paths either.
>>> path1 = pathlib.Path(r"C:\Sync\Rmaster_head_\bin")
>>> path2 = pathlib.Path(r"C:\Sync\installed\R2023.1.175_install\documentation")
>>> os.path.relpath(path1, start=path2)
'..\\..\\..\\Rmaster_head_\\bin'
>>> os.path.relpath(path2, start=path1)
'..\\..\\installed\\R2023.1.175_install\\documentation'
and path1.relative_to(path2) and path2.relative_to(path1) both fail.
What am I missing?
What you're missing are the notes about these methods in the pathlib documentation.
From the documentation of relative_to():
NOTE: This function is part of PurePath and works with strings. It does not check or access the underlying file structure.
Although it would still be possible to derive the relative path you're looking for from the strings alone, the method apparently doesn't do that; the example and the error message in the docs are clear about this.
That the function is different from os.path.relpath() is also explicitly mentioned at the top of the section you referenced in your question:
Note: Although os.path.relpath() and PurePath.relative_to() have some overlapping use-cases, their semantics differ enough to warrant not considering them equivalent.
Unfortunately, this (along with some other things) means that pathlib cannot be used to fully replace the os functions. In many cases you'll have to use a mix of both :(
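For example, a sketch of mixing the two, using the absolute paths from the question (on Python 3.12+, PurePath.relative_to() also accepts walk_up=True to produce '..' segments, but on 3.9 the os.path route below is the practical option):
import os.path
import pathlib

path1 = pathlib.Path(r"C:\Sync\Rmaster_head_\bin")
path2 = pathlib.Path(r"C:\Sync\installed\R2023.1.175_install\documentation")

# Let os.path.relpath do the string work, then wrap the result back into a Path.
rel = pathlib.Path(os.path.relpath(path1, start=path2))
print(rel)                      # ..\..\..\Rmaster_head_\bin
print((path2 / rel).resolve())  # C:\Sync\Rmaster_head_\bin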
For the same class definition in the same program I get TypeError: 'G' object is not iterable in one place but not in another. I am wondering why, and whether there is a workaround.
class G: # generator class
    def __init__(self):
        self.c = 1
    def __next__(self):
        yield self.c

g = G()
for i in range(3):
    print(next(g))

g = G()
i = 0
for x in g:
    print(x)
    i += 1
    if i >= 3:
        break
In the first for loop it works, but in the second for loop I get the runtime error TypeError: 'G' object is not iterable.
The exact output is:
<generator object G.__next__ at 0x000001ECA83E43C0>
<generator object G.__next__ at 0x000001ECA83E43C0>
<generator object G.__next__ at 0x000001ECA83E43C0>
Traceback (most recent call last):
File "C:\Users\15104\AppData\Local\Programs\Python\Python310\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "C:\Program Files\JetBrains\PyCharm 2022.2\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2022.2\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/15104/Dropbox/Programming/Python/try_tesserOCR/try1/try_with1.py", line 16, in <module>
for x in g:
TypeError: 'G' object is not iterable
Although in this simple example one might ask "why would anybody do that?", the code is taken out of a longer program. Taking out the second g = G() doesn't change the behavior.
The only difference I see is that the first loop calls next() explicitly while the second calls it implicitly through the for x in g: statement. However, a for loop seems like a perfectly valid way to consume an iterable.
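A minimal sketch of the difference (G2 below is only an illustrative workaround, not code from the original program): the for statement first calls iter() on its operand, which requires __iter__ (or the old __getitem__ protocol), and defining __next__ alone is not enough.
g = G()
print(next(g))       # works: g.__next__() returns a new generator object on each call
try:
    iter(g)          # this is what `for x in g` does first
except TypeError as e:
    print(e)         # 'G' object is not iterable -- G defines no __iter__

# One possible workaround: make the class its own iterator.
class G2:
    def __init__(self):
        self.c = 1
    def __iter__(self):
        return self    # an iterator returns itself from __iter__
    def __next__(self):
        return self.c  # return a value instead of yielding one

for i, x in zip(range(3), G2()):
    print(x)           # prints 1 three times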
I am trying to create a dict of z_scores by filtering a dataframe based upon five locations.
No matter which location is first in the list, I always get the first key:value pair placed into
the dict, and no matter which location is second, I always get this error:
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2018.2.1\plugins\python\helpers\pydev\pydevd.py", line 1434, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2018.2.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/Mark/PycharmProjects/main/main.py", line 104, in <module>
z_score = z_score(base, df['SalePrice'])
TypeError: 'numpy.float64' object is not callable
Since every list value works when it is first, I don't see why every subsequent iteration fails.
My code:
from numpy import std  # assumed import: std() below comes from numpy

def z_score(val, array, bessel=0):
    mean = array.mean()
    st_dev = std(array, ddof=bessel)
    distance = val - mean
    z = distance / st_dev
    return z

neighborhoods = ['NAmes', 'CollgCr', 'OldTown', 'Edwards', 'Somerst']
base = 200000
z_scores = {}

for neighborhood in neighborhoods:
    df = houses.loc[houses['Neighborhood'] == neighborhood]
    z_score = z_score(base, df['SalePrice'])
    z_scores[neighborhood] = z_score

sorted_z_scores = sorted(z_scores.items(), key=lambda x: x[1], reverse=True)
print(sorted_z_scores)
In Python a function is just an object bound to a name, so the assignment z_score = z_score(base, df['SalePrice']) rebinds the name z_score from the function to the numpy.float64 it returned. On the next iteration of the loop, z_score(...) therefore tries to call that float, which raises TypeError: 'numpy.float64' object is not callable. If you give the result a different name than the function, your code should run.
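A minimal sketch of that rename (score is just an illustrative variable name):
for neighborhood in neighborhoods:
    df = houses.loc[houses['Neighborhood'] == neighborhood]
    score = z_score(base, df['SalePrice'])  # z_score still refers to the function
    z_scores[neighborhood] = score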
Traceback (most recent call last):
File "dac.py", line 87, in
X_train=load_create_padded_data(X_train=X_train,savetokenizer=False,isPaddingDone=False,maxlen=sequence_length,tokenizer_path='./New_Tokenizer.tkn')
File "/home/dpk/Downloads/DAC/New_Utils.py", line 92, in load_create_padded_data
X_train=tokenizer.texts_to_sequences(X_train)
File "/home/dpk/anaconda2/envs/venv/lib/python2.7/site-packages/keras_preprocessing/text.py", line 278, in texts_to_sequences
return list(self.texts_to_sequences_generator(texts))
File "/home/dpk/anaconda2/envs/venv/lib/python2.7/site-packages/keras_preprocessing/text.py", line 296, in texts_to_sequences_generator
oov_token_index = self.word_index.get(self.oov_token)
AttributeError: 'Tokenizer' object has no attribute 'oov_token'
Probably this one:
You can manually set tokenizer.oov_token = None to fix this.
Pickle is not a reliable way to serialize objects since it assumes
that the underlying Python code/modules you're importing have not
changed. In general, DO NOT use pickled objects with a different
version of the library than what was used at pickling time. That's not
a Keras issue, it's a generic Python/Pickle
https://github.com/keras-team/keras/issues/9099
To fix this I manually set self.oov_token = None, rather than tokenizer.oov_token = None.
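For illustration, a sketch of patching an unpickled tokenizer after loading it (the file name comes from the traceback; the pickle format and the sample text are assumptions about the original setup):
import pickle

# Tokenizer pickled with an older keras_preprocessing version.
with open('./New_Tokenizer.tkn', 'rb') as f:
    tokenizer = pickle.load(f)

# The attribute was added in newer versions, so old pickles may lack it.
if not hasattr(tokenizer, 'oov_token'):
    tokenizer.oov_token = None

texts = ['some training sentence']  # placeholder input
sequences = tokenizer.texts_to_sequences(texts)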
I would like to specify a schema field which accepts one or many resources. However, I only seem to be able to specify one behavior or the other.
>>> import marshmallow
>>> class ResourceSchema(marshmallow.Schema):
...     data = marshmallow.fields.Dict()
...
>>> class ContainerSchema(marshmallow.Schema):
... resource = marshmallow.fields.Nested(ResourceSchema, many=True)
...
>>> ContainerSchema().dump({'resource': [{'data': 'DATA'}]})
MarshalResult(data={'resource': [{'data': 'DATA'}]}, errors={})
In the above example a list must be passed. However, I would prefer not to have to:
>>> ContainerSchema().dump({'resource': {'data': 'DATA'}})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/lib64/python3.6/site-packages/marshmallow/schema.py", line 513, in dump
**kwargs
File "/lib64/python3.6/site-packages/marshmallow/marshalling.py", line 147, in serialize
index=(index if index_errors else None)
File "/lib64/python3.6/site-packages/marshmallow/marshalling.py", line 68, in call_and_store
value = getter_func(data)
File "/lib64/python3.6/site-packages/marshmallow/marshalling.py", line 141, in <lambda>
getter = lambda d: field_obj.serialize(attr_name, d, accessor=accessor)
File "/lib64/python3.6/site-packages/marshmallow/fields.py", line 252, in serialize
return self._serialize(value, attr, obj)
File "/lib64/python3.6/site-packages/marshmallow/fields.py", line 448, in _serialize
schema._update_fields(obj=nested_obj, many=self.many)
File "/lib64/python3.6/site-packages/marshmallow/schema.py", line 760, in _update_fields
ret = self.__filter_fields(field_names, obj, many=many)
File "/lib64/python3.6/site-packages/marshmallow/schema.py", line 810, in __filter_fields
obj_prototype = obj[0]
KeyError: 0
Can I have a schema that allows either a single item or many of them?
The point of giving the arguments as a list, whether it holds one item or many, is so the schema knows how to handle either case. For the schema to accept input in a different shape, such as a single item not wrapped in a list, you need to add a preprocessor to the schema, like this:
import marshmallow

class ContainerSchema(marshmallow.Schema):
    resource = marshmallow.fields.Nested(ResourceSchema, many=True)

    @marshmallow.pre_dump
    def wrap_indata(self, indata):
        # Wrap a single resource dict in a list before dumping.
        if type(indata['resource']) is dict:
            indata['resource'] = [indata['resource']]
        return indata  # processors should return the (possibly modified) data
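With the preprocessor in place, a single resource is wrapped into a one-element list before serialization, so both input shapes should dump consistently (a usage sketch, assuming marshmallow 2.x as the MarshalResult output above suggests):
schema = ContainerSchema()

# Both of these now serialize 'resource' as a one-element list
# instead of raising KeyError: 0.
schema.dump({'resource': {'data': 'DATA'}})
schema.dump({'resource': [{'data': 'DATA'}]})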