The ijson module has a documented option allow_comments=True, but when I include it,
an error message is produced:
ValueError: Comments are not supported by the python backend
Below is a transcript using the file test.py:
import ijson
for o in ijson.items(open(0), 'item'):
    print(o)
Please note that I have no problem with a similar documented option, multiple_values=True.
Transcript
$ python3 --version
Python 3.10.9
$ python3 test.py <<< [1,2]
1
2
# Now change the call to: ijson.items(open(0), 'item', allow_comments=True)
$ python3 test.py <<< [1,2]
Traceback (most recent call last):
  File "/Users/user/test.py", line 5, in <module>
    for o in ijson.items(open(0), 'item', allow_comments=True):
  File "/usr/local/lib/python3.10/site-packages/ijson/utils.py", line 51, in coros2gen
    f = chain(events, *coro_pipeline)
  File "/usr/local/lib/python3.10/site-packages/ijson/utils.py", line 29, in chain
    f = coro_func(f, *coro_args, **coro_kwargs)
  File "/usr/local/lib/python3.10/site-packages/ijson/backends/python.py", line 284, in basic_parse_basecoro
    raise ValueError("Comments are not supported by the python backend")
ValueError: Comments are not supported by the python backend
$
Take a look at the Backends section of the documentation, which says:
Ijson provides several implementations of the actual parsing in the form of backends located in ijson/backends:
yajl2_c: a C extension using YAJL 2.x. This is the fastest, but might require a compiler and the YAJL development files to be present when installing this package. Binary wheel distributions exist for major platforms/architectures to spare users from having to compile the package.
yajl2_cffi: wrapper around YAJL 2.x using CFFI.
yajl2: wrapper around YAJL 2.x using ctypes, for when you can’t use CFFI for some reason.
yajl: deprecated YAJL 1.x + ctypes wrapper, for even older systems.
python: pure Python parser, good to use with PyPy
And later on in the FAQ it says:
Q: Are there any differences between the backends?
...
The python backend doesn't support allow_comments=True. It also internally works with str objects, not bytes, but this is an internal detail that users shouldn't need to worry about, and might change in the future.
If you want support for allow_comments=True, you need to be using one of the yajl-based backends. According to the docs:
Importing the top level library as import ijson uses the first available backend in the same order of the list above, and its name is recorded under ijson.backend. If the IJSON_BACKEND environment variable is set its value takes precedence and is used to select the default backend.
You'll need the necessary libraries, etc, installed on your system in order for this to work.
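For example, here is a minimal sketch (assuming the yajl2_c backend and the YAJL library are actually installed; any of the yajl2 backends will do): either select the backend with the IJSON_BACKEND environment variable before Python starts, or import a backend module directly and use it in place of the top-level ijson.
# Option 1: select the backend via the environment, before starting Python:
#   IJSON_BACKEND=yajl2_c python3 test.py <<< '[1, 2] /* comments now allowed */'

# Option 2: import a specific backend module directly
import sys
import ijson.backends.yajl2_c as ijson_yajl   # or yajl2_cffi / yajl2

for o in ijson_yajl.items(sys.stdin.buffer, 'item', allow_comments=True):
    print(o)
With the top-level import, ijson.backend records which backend was actually picked, so you can verify the yajl backend is in use.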
I'm working with the example code for importing an IPython (Jupyter) notebook,
Importing Notebooks. The example code still runs fine, but it generates a warning that I would like to understand and fix:
site-packages/nbformat/current.py:15: UserWarning: nbformat.current is deprecated.
- use nbformat for read/write/validate public API
- use nbformat.vX directly to composing notebooks of a particular version
warnings.warn("""nbformat.current is deprecated.
This warning has been discussed since 2015, at least, and yet I cannot find any constructive advice about what to do about it. Is this a warning that can be addressed by fixing code, or is it a function that will disappear from IPython without a replacement?
If you follow the link to the IPython blog, they claim that there is a newer version, but their link points to a non-existent page.
This code example is widely discussed in other threads in Stack Overflow, for example
python access functions in ipython notebook
Keep in mind the example you've linked to is for a much older version of Jupyter, 4.x. The page with these examples was relocated at some point; for the 5.7.6 version of Jupyter (the most recent as of writing this) it's located here.
First, replace the from IPython.nbformat import current import with from nbformat import read
Then, replace this part of the 4.x snippet:
with io.open(path, 'r', encoding='utf-8') as f:
    nb = current.read(f, 'json')
with the newer version:
with io.open(path, 'r', encoding='utf-8') as f:
    nb = read(f, 4)
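Put together, a minimal sketch of the updated read step looks like this (the notebook path is a placeholder; the rest of the loader from the linked example stays the same):
import io
from nbformat import read

path = "example.ipynb"   # placeholder path

with io.open(path, 'r', encoding='utf-8') as f:
    nb = read(f, 4)      # 4 = the notebook format version to convert to

# A version-4 notebook object exposes its cells directly under nb.cells:
for cell in nb.cells:
    if cell.cell_type == 'code':
        print(cell.source)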
Please note I am asking this question for informational purposes only.
I know the title sounds like a duplicate of Finding the source code for built-in Python functions?. But let me explain.
Say, for example, I want to find the source code of the most_common method of the collections.Counter class. Since the Counter class is implemented in Python, I can use the inspect module to get its source code,
i.e.,
>>> import inspect
>>> import collections
>>> print(inspect.getsource(collections.Counter.most_common))
This will print
def most_common(self, n=None):
    '''List the n most common elements and their counts from the most
    common to the least. If n is None, then list all element counts.

    >>> Counter('abcdeabcdabcaba').most_common(3)
    [('a', 5), ('b', 4), ('c', 3)]
    '''
    # Emulate Bag.sortedByCount from Smalltalk
    if n is None:
        return sorted(self.items(), key=_itemgetter(1), reverse=True)
    return _heapq.nlargest(n, self.items(), key=_itemgetter(1))
But if the method or class is implemented in C, inspect.getsource will raise a TypeError:
>>> my_list = []
>>> print(inspect.getsource(my_list.append))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\abdul.niyas\AppData\Local\Programs\Python\Python36-32\lib\inspect.py", line 968, in getsource
    lines, lnum = getsourcelines(object)
  File "C:\Users\abdul.niyas\AppData\Local\Programs\Python\Python36-32\lib\inspect.py", line 955, in getsourcelines
    lines, lnum = findsource(object)
  File "C:\Users\abdul.niyas\AppData\Local\Programs\Python\Python36-32\lib\inspect.py", line 768, in findsource
    file = getsourcefile(object)
  File "C:\Users\abdul.niyas\AppData\Local\Programs\Python\Python36-32\lib\inspect.py", line 684, in getsourcefile
    filename = getfile(object)
  File "C:\Users\abdul.niyas\AppData\Local\Programs\Python\Python36-32\lib\inspect.py", line 666, in getfile
    'function, traceback, frame, or code object'.format(object))
TypeError: <built-in method append of list object at 0x00D3A378> is not a module, class, method, function, traceback, frame, or code object.
So my question is: is there any way (or a third-party package) to find the source code of a class or method implemented in C as well?
I.e., something like this:
>>> print(some_how_or_some_custom_package([].append))
int
PyList_Append(PyObject *op, PyObject *newitem)
{
    if (PyList_Check(op) && (newitem != NULL))
        return app1((PyListObject *)op, newitem);
    PyErr_BadInternalCall();
    return -1;
}
No, there is not. There is no metadata accessible from Python that will let you find the original source file. Such metadata would have to be created explicitly by the Python developers, and it is not clear what benefit that would bring.
First and foremost, the vast majority of Python installations do not include the C source code. Next, while you can reasonably expect users of the Python language to be able to read Python source code, Python's user base is very broad, and a large number either don't know C or aren't interested in how the C code works. And finally, even developers who do know C can't be expected to go read the Python C API documentation, something that quickly becomes a requirement if you want to understand the CPython codebase.
C files do not map directly to a specific output file, unlike Python bytecode cache files and scripts. Unless you create a debug build with a symbol table, the compiler doesn't retain the source filename in the object file (.o) it outputs, nor does the linker record which .o files went into the result it produces. Nor do all C files end up contributing to the same executable or dynamic shared object file: some become part of the Python binary, others become loadable extensions, and the mix is configurable and depends on which external libraries are available at compile time.
And between makefiles, setup.py and C preprocessor macros, the combination of input files, and which lines of source code actually end up in each output file, also varies. Last but not least, because the C source files are no longer consulted at runtime, they can't be expected to still be available in their original location, so even if such metadata were stored you still couldn't map it back to the original sources.
So it's easier to just remember a few basic rules about how the Python C API works, and map those back to the C code with a few informed code searches.
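As a rough illustration of that approach, here is a tiny helper (a sketch of my own, not an existing tool; it only prints suggestions for what to search for in the CPython repository):
def c_source_hint(obj):
    """Print search hints for locating a builtin's C implementation in CPython."""
    name = getattr(obj, '__qualname__', None) or getattr(obj, '__name__', repr(obj))
    module = getattr(obj, '__module__', None) or 'builtins'
    print("Search https://github.com/python/cpython for:")
    print("  - the method-table entry or docstring for %r" % name.split('.')[-1])
    print("  - builtin types live under Objects/, C stdlib modules under Modules/")
    print("  (reported module: %s)" % module)

c_source_hint([].append)   # list.append lives in Objects/listobject.c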
Alternatively, download the Python source code and create a debug build, and use a good IDE to help you map symbols and such back to source files. Different compilers, platforms and IDEs have different methods of supporting symbol tables for debugging.
There could be a way if you had the full debug information (which is usually stripped).
Then you would go to the .so (or .pyd) and use platform-specific tools to extract the debug information (stored in the .so itself, or in the .pdb on Windows) for the required function. You may want to have a look at DWARF information for Linux (on Windows, there is no documentation AFAIK).
In my YAML file I have the below entry:
- type: dir
  name: .ssh
  chmod: 0o700
According to the YAML 1.2 specification, section 3.2.1.3, 0o700 is the way to specify octals (there is also an example in section 2.4).
However, when I process the loaded file and do:
import os
import yaml

filename = "in.yml"
with open(filename) as fp:
    for e in yaml.load(fp):
        if e['type'] == 'dir':
            os.mkdir(e['name'], e['chmod'])
I get TypeError: an integer is required. What is going wrong here?
I am using Python 3.5
What's wrong is that you assume your YAML library supports the latest version, 1.2. That YAML version is from 2009, but you are using PyYAML, which still only supports 1.1. Judging by the inactivity of the last few years it seems to be a dead project, so don't expect this to be solved any time soon.
You can add
import re
from yaml.resolver import Resolver

Resolver.add_implicit_resolver(
    'tag:yaml.org,2002:int',
    re.compile(r'''^(?:[-+]?0b[0-1_]+
    |[-+]?0o?[0-7_]+
    |[-+]?0[0-7_]+
    |[-+]?(?:0|[1-9][0-9_]*)
    |[-+]?0x[0-9a-fA-F_]+
    |[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X),
    list('-+0123456789'))
in your program to add recognition of 0o123-style octals (it also still recognizes the 1.1 octals).
Please note that the above only works for Python 3, as PyYAML has different code for Python 2.
You should also consider using pathlib.Path objects and their .mkdir() instead of os.mkdir().
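For example, a small sketch of that suggestion (the dict literal stands in for one entry of the loaded YAML):
from pathlib import Path

e = {'type': 'dir', 'name': '.ssh', 'chmod': 0o700}   # one parsed YAML entry
if e['type'] == 'dir':
    Path(e['name']).mkdir(mode=e['chmod'])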
Install ruamel.yaml (pip install ruamel.yaml). It defaults to loading YAML 1.2, as documented here:
unless the YAML document is loaded with an explicit version==1.1, or the document starts with %YAML 1.1, ruamel.yaml will load the document as version 1.2.
and
YAML 1.2 no longer accepts strings that start with a 0 and solely consist of number characters as octal, you need to specify such strings with 0o[0-7]+ (zero + lower-case o for octal + one or more octal characters).
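A minimal sketch using ruamel.yaml's newer API (in.yml is the file from the question):
from ruamel.yaml import YAML

yaml = YAML(typ='safe')        # YAML 1.2 rules by default
with open("in.yml") as fp:
    data = yaml.load(fp)

print(data[0]['chmod'])        # 448, i.e. the int 0o700, not a string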
I have searched this site top to bottom, yet have not found a single way to actually accomplish what I want in Python 3.x. This is a simple toy app, so I figured I could write some simple test cases as asserts and call it a day. It does generate reports and such, so I would like to make sure my code doesn't do anything wonky when it changes.
My current directory structure is: (only relevant parts included)
project
  -model
    __init__.py
    my_file.py
  -test
    my_file_test.py
I am having a hell of a time getting my_file_test.py to import my_file.py.
Like I've said, I've searched this site top to bottom and no solution has worked. My version of Python is 3.2.3, running on Fedora 17.
Previously tried attempts:
https://stackoverflow.com/questions/5078590/dynamic-imports-relative-imports-in-python-3
Importing modules from parent folder
Can anyone explain python's relative imports?
How to accomplish relative import in python
In virtually every attempt I get an error to the effect of:
ImportError: No module named *
OR
ValueError: Attempted relative import in non-package
What is going on here? I have tried every accepted answer on SO as well as advice from all over the interwebs. I'm not doing anything that fancy here, but as a .NET/Java/Ruby programmer this is proving to be anything but intuitive.
EDIT: If it matters I tried loading the class that I am trying to import in the REPL and I get the following:
>>> import datafileclass
>>> datafileclass.methods
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
>>> x = datafileclass('sample_data/sample_input.csv')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'module' object is not callable
If it matters... I know the functionality in the class works, but I can't import it, which for now means I can't test it. In the future it will certainly cause integration issues. (Names changed to protect the innocent.)
I'm getting within a couple of weeks of the desired functionality for this iteration of the library... any help would be useful. I would have done it in Ruby, but the client wants Python as a learning experience.
Structure your code like this:
project
  -model
    __init__.py
    my_file.py
  -tests
    __init__.py
    test_my_file.py
Importantly, your tests directory should also be a module directory (have an empty __init__.py file in it).
Then in test_my_file.py use from model import my_file, and from the top directory run python -m tests.test_my_file. This is invoking test_my_file as a module, which results in Python setting up its import path to include your top level.
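For instance, a minimal sketch of tests/test_my_file.py (the contents of model/my_file.py aren't shown in the question, so the assertion here is only a placeholder):
import unittest

from model import my_file


class TestMyFile(unittest.TestCase):
    def test_module_imports(self):
        # Placeholder assertion; replace with checks against real functions in my_file.
        self.assertTrue(hasattr(my_file, '__name__'))


if __name__ == '__main__':
    unittest.main()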
Even better, you can use pytest or nose, and running py.test will pick up the tests automatically.
I realise this doesn't answer your question, but it's going to be a lot easier for you to work with Python standard practices rather than against them. That means structuring your project with tests in their own top-level directory.
I can't seem to get cairo regions working from within Python using GObject introspection.
For example
from gi.repository import cairo
reg = cairo.Region()
will give me
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError
and trying to get a region from Gdk.get_clip_region() will give me
return info.invoke(*args)
TypeError: Couldn't find conversion for foreign struct 'cairo.Region'
What obvious thing am I missing? I can't find a way to initialize the library, and can't imagine you would need to for regions, which seem like a simple struct. I don't know why Gdk can't find the cairo types, and am not aware if I'm supposed to show it the way somehow.
Apparently you need to use the regular cairo bindings, even when you use introspection for everything else.
So just import cairo.
(I'm not sure why gi.repository.cairo exists...)
And the "Couldn't find conversion" error will go away when you have all the necessary libraries (e.g. on Ubuntu you need the python-gi-cairo package in addition to python-cairo (or the equivalent python3 packages)).