Using 'with' with 'next()' in python 3 [duplicate] - python-3.x

I came across the Python with statement for the first time today. I've been using Python lightly for several months and didn't even know of its existence! Given its somewhat obscure status, I thought it would be worth asking:
What is the Python with statement designed to be used for?
What do you use it for?
Are there any gotchas I need to be aware of, or common anti-patterns associated with its use? Any cases where it is better to use try..finally than with?
Why isn't it used more widely?
Which standard library classes are compatible with it?

I believe this has already been answered by other users before me, so I only add it for the sake of completeness: the with statement simplifies exception handling by encapsulating common preparation and cleanup tasks in so-called context managers. More details can be found in PEP 343. For instance, the file object returned by open() is itself a context manager, which lets you open a file, keep it open as long as the execution is in the context of the with statement where you used it, and close it as soon as you leave the context, no matter whether you have left it because of an exception or during regular control flow. The with statement can thus be used in ways similar to the RAII pattern in C++: some resource is acquired by the with statement and released when you leave the with context.
Some examples are: opening files using with open(filename) as fp:, acquiring locks using with lock: (where lock is an instance of threading.Lock). You can also construct your own context managers using the contextmanager decorator from contextlib. For instance, I often use this when I have to change the current directory temporarily and then return to where I was:
from contextlib import contextmanager
import os

@contextmanager
def working_directory(path):
    current_dir = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(current_dir)

with working_directory("data/stuff"):
    ...  # do something within data/stuff
# here I am back again in the original working directory
Here's another example that temporarily redirects sys.stdin, sys.stdout and sys.stderr to some other file handle and restores them later:
from contextlib import contextmanager
import sys

@contextmanager
def redirected(**kwds):
    stream_names = ["stdin", "stdout", "stderr"]
    old_streams = {}
    try:
        for sname in stream_names:
            stream = kwds.get(sname, None)
            if stream is not None and stream != getattr(sys, sname):
                old_streams[sname] = getattr(sys, sname)
                setattr(sys, sname, stream)
        yield
    finally:
        for sname, stream in old_streams.items():
            setattr(sys, sname, stream)

with redirected(stdout=open("/tmp/log.txt", "w")):
    # these print calls will go to /tmp/log.txt
    print("Test entry 1")
    print("Test entry 2")
# back to the normal stdout
print("Back to normal stdout again")
And finally, another example that creates a temporary folder and cleans it up when leaving the context:
from contextlib import contextmanager
from tempfile import mkdtemp
from shutil import rmtree

@contextmanager
def temporary_dir(*args, **kwds):
    name = mkdtemp(*args, **kwds)
    try:
        yield name
    finally:
        rmtree(name)

with temporary_dir() as dirname:
    ...  # do whatever you want

I would suggest two interesting reads:
PEP 343 - The "with" Statement
effbot: Understanding Python's "with" statement
1.
The with statement is used to wrap the execution of a block with methods defined by a context manager. This allows common try...except...finally usage patterns to be encapsulated for convenient reuse.
2.
You could do something like:
with open("foo.txt") as foo_file:
data = foo_file.read()
OR (using contextlib.nested, which is deprecated since Python 2.7 and removed in Python 3):
from contextlib import nested
with nested(A(), B(), C()) as (X, Y, Z):
    do_something()
OR (Python 3.1)
with open('data') as input_file, open('result', 'w') as output_file:
    for line in input_file:
        output_file.write(parse(line))
OR
import threading

lock = threading.Lock()
with lock:
    ...  # critical section of code
3.
I don't see any antipattern here.
Quoting Dive into Python:
try..finally is good. with is better.
4.
I guess it's related to programmers' habit of using try..catch..finally statements from other languages.

The Python with statement is built-in language support of the Resource Acquisition Is Initialization idiom commonly used in C++. It is intended to allow safe acquisition and release of operating system resources.
The with statement creates resources within a scope/block. You write your code using the resources within the block. When the block exits the resources are cleanly released regardless of the outcome of the code in the block (that is whether the block exits normally or because of an exception).
Many resources in the Python library obey the protocol required by the with statement and so can be used with it out-of-the-box. However, anyone can make resources that can be used in a with statement by implementing the well-documented protocol: PEP 0343
Use it whenever you acquire resources in your application that must be explicitly relinquished such as files, network connections, locks and the like.
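For completeness, here is a minimal sketch of that protocol in class form; the ManagedResource name and its print calls are purely illustrative and not taken from any library:
class ManagedResource:
    """Minimal class-based context manager (illustrative names only)."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        print(f"acquiring {self.name}")    # acquire the OS resource here
        return self                        # bound to the `as` target
    def __exit__(self, exc_type, exc_value, traceback):
        print(f"releasing {self.name}")    # always runs, even on exceptions
        return False                       # do not suppress exceptions

with ManagedResource("some resource") as res:
    print(f"using {res.name}")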

Again for completeness I'll add my most useful use-case for with statements.
I do a lot of scientific computing, and for some activities I need the Decimal library for arbitrary-precision calculations. In some parts of my code I need high precision, and for most other parts I need less precision.
I set my default precision to a low number and then use with to get a more precise answer for some sections:
from decimal import localcontext

with localcontext() as ctx:
    ctx.prec = 42   # perform a high-precision calculation
    s = calculate_something()
s = +s              # round the final result back to the default precision
I use this a lot with the hypergeometric test, which requires the division of large numbers resulting from factorials. When you do genomic-scale calculations you have to be careful of round-off and overflow errors.
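As a self-contained sketch of that pattern (the choose() helper below is illustrative, not the author's actual code):
from decimal import Decimal, getcontext, localcontext
from math import factorial

getcontext().prec = 6      # low default precision for most of the program

def choose(n, k):
    # binomial coefficient via factorials, the kind of ratio a hypergeometric test needs
    return Decimal(factorial(n)) / (Decimal(factorial(k)) * Decimal(factorial(n - k)))

with localcontext() as ctx:
    ctx.prec = 42          # high precision only inside this block
    p = choose(100, 50) / Decimal(2) ** 100

p = +p                     # unary plus rounds back to the default 6 digits
print(p)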

An example of an antipattern might be to use with inside a loop when it would be more efficient to have the with outside the loop, for example:
for row in lines:
    with open("outfile", "a") as f:
        f.write(row)
vs
with open("outfile", "a") as f:
    for row in lines:
        f.write(row)
The first way opens and closes the file for each row, which may cause performance problems compared to the second way, which opens and closes the file just once.

See PEP 343 - The 'with' statement, there is an example section at the end.
... new statement "with" to the Python
language to make
it possible to factor out standard uses of try/finally statements.
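As a rough sketch of what gets factored out, the file example above is approximately equivalent to the following try/finally code (the real expansion goes through __enter__ and __exit__; this only shows the idea):
# with open("foo.txt") as foo_file:
#     data = foo_file.read()
# is roughly:
foo_file = open("foo.txt")
try:
    data = foo_file.read()
finally:
    foo_file.close()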

Points 1, 2, and 3 being reasonably well covered:
4: it is relatively new, only available in Python 2.6+ (or Python 2.5 using from __future__ import with_statement)

The with statement works with so-called context managers:
http://docs.python.org/release/2.5.2/lib/typecontextmanager.html
The idea is to simplify exception handling by doing the necessary cleanup after leaving the with block. Some of the Python built-ins already work as context managers.

Another example for out-of-the-box support, and one that might be a bit baffling at first when you are used to the way built-in open() behaves, are connection objects of popular database modules such as:
sqlite3
psycopg2
cx_oracle
The connection objects are context managers and as such can be used out-of-the-box in a with statement; however, when using the above, note that:
When the with-block is finished, either with an exception or without, the connection is not closed. In case the with-block finishes with an exception, the transaction is rolled back, otherwise the transaction is committed.
This means that the programmer has to take care to close the connection themselves, but it allows them to acquire a connection and use it in multiple with statements, as shown in the psycopg2 docs:
conn = psycopg2.connect(DSN)

with conn:
    with conn.cursor() as curs:
        curs.execute(SQL1)

with conn:
    with conn.cursor() as curs:
        curs.execute(SQL2)

conn.close()
In the example above, you'll note that the cursor objects of psycopg2 also are context managers. From the relevant documentation on the behavior:
When a cursor exits the with-block it is closed, releasing any resource eventually associated with it. The state of the transaction is not affected.
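The same behaviour is easy to see with the standard-library sqlite3 module; a minimal sketch (table and values made up for illustration):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")

with conn:  # commits because the block succeeds
    conn.execute("INSERT INTO items VALUES ('widget')")

try:
    with conn:  # rolls back because the block raises
        conn.execute("INSERT INTO items VALUES ('gadget')")
        raise RuntimeError("boom")
except RuntimeError:
    pass

print(conn.execute("SELECT name FROM items").fetchall())  # [('widget',)]
conn.close()  # the with blocks above did not close the connection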

In Python, the with statement is commonly used to open a file, process the data in it, and close the file without explicitly calling a close() method. The with statement makes exception handling simpler by providing cleanup activities.
General form of with:
with open("file name", "mode") as file_var:
    processing statements
note: there is no need to close the file by calling file_var.close()

The answers here are great, but just to add a simple one that helped me:
with open("foo.txt") as file:
data = file.read()
open returns a file object
Since Python 2.5, file objects have had the methods __enter__ and __exit__.
with is like a block that calls __enter__, runs the body once, and then calls __exit__
with works with any instance that has __enter__ and __exit__
on some platforms a file is locked and not re-usable by other processes until it's closed; __exit__ closes it.
source: http://web.archive.org/web/20180310054708/http://effbot.org/zone/python-with-statement.htm
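A tiny sketch (the Tracer class is made up for illustration) shows when those methods run, including when the body raises:
class Tracer:
    def __enter__(self):
        print("__enter__")
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        print("__exit__, exception:", exc_type)
        return False   # let any exception propagate

try:
    with Tracer():
        print("body")
        raise ValueError("oops")
except ValueError:
    pass
# prints __enter__, body, then __exit__ with exc_type == ValueError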

Related

Lock is not recognized in functions of different python modules

I have a multiprocessing Lock, which I define as
import multiprocessing as mp
import os

lock1 = mp.Lock()
To share this lock among the different child processes I do:
def setup_process(lock1):
    global lock_1
    lock_1 = lock1

pool = mp.Pool(os.cpu_count() - 1,
               initializer=setup_process,
               initargs=[lock1])
Now I've noticed that if the processes call the following function, and the function is defined in the same python module (i.e., same file):
def test_func():
    print("lock_1:", lock_1)
    with lock_1:
        print(str(mp.current_process()) + " has the lock in test function.")
I get an output like:
lock_1 <Lock(owner=None)>
<ForkProcess name='ForkPoolWorker-1' parent=82414 started daemon> has the lock in test function.
lock_1 <Lock(owner=None)>
<ForkProcess name='ForkPoolWorker-2' parent=82414 started daemon> has the lock in test function.
lock_1 <Lock(owner=None)>
<ForkProcess name='ForkPoolWorker-3' parent=82414 started daemon> has the lock in test function.
However, if test_function is defined in a different file, the Lock is not recognized, and I get:
NameError: name 'lock_1' is not defined
This seems to happen for every function, where the important distinction is whether the function is defined in this module or in another one. I'm sure I'm missing something very obvious with the global variables, but I'm new to this and I haven't been able to figure it out. How can I make the Locks be recognized everywhere?
Well, I learned something new about Python today: global isn't actually truly global; it is only accessible at module scope.
There are a multitude of ways of sharing your lock with the module in order to allow it to be used, and the docs even suggest a "canonical" way of sharing globals between modules (though I don't feel it's the most appropriate for this situation). To me this situation illustrates one of the shortcomings of using globals in the first place, though I have to admit that in the specific case of multiprocessing.Pool initializers it seems to be the accepted or even intended use case to use globals to pass data to worker functions. It actually makes sense that globals can't cross module boundaries, because that would make the separate module 100% dependent on being executed by a specific script, so it couldn't really be considered a separate independent library; instead it could just be included in the same file. I recognize that may be at odds with splitting things up not to create re-usable libraries but simply to organize code logically into shorter, easier-to-read segments, but that's apparently a stylistic choice by the designers of Python.
To solve your problem, at the end of the day you are going to have to pass the lock to the other module as an argument, so you might as well make test_func receive lock_1 as an argument. You may have found, however, that this causes a "RuntimeError: Lock objects should only be shared between processes through inheritance" message, so what to do? Basically, I would keep your initializer and wrap test_func in another function which is in the __main__ scope (and therefore has access to your global lock_1), which grabs the lock and then passes it to the function. Unfortunately we can't use a nicer-looking decorator or a wrapper function, because those return a function which only exists in a local scope and can't be imported when using "spawn" as the start method.
from multiprocessing import Pool, Lock, current_process

def init(l):
    global lock_1
    lock_1 = l

def local_test_func(shared_lock):
    with shared_lock:
        print(f"{current_process()} has the lock in local_test_func")

def local_wrapper():
    global lock_1
    local_test_func(lock_1)

from mymodule import module_test_func  # same as local_test_func basically...

def module_wrapper():
    global lock_1
    module_test_func(lock_1)

if __name__ == "__main__":
    l = Lock()
    with Pool(initializer=init, initargs=(l,)) as p:
        p.apply(local_wrapper)
        p.apply(module_wrapper)

How to work around opencv bug crashing on imshow when not in main thread [duplicate]

This has been answered for Android, Objective C and C++ before, but apparently not for Python. How do I reliably determine whether the current thread is the main thread? I can think of a few approaches, none of which really satisfy me, considering it could be as easy as comparing to threading.MainThread if it existed.
Check the thread name
The main thread is instantiated in threading.py like this:
Thread.__init__(self, name="MainThread")
so one could do
if threading.current_thread().name == 'MainThread'
but is this name fixed? Other code I have seen checks whether MainThread is contained anywhere in the thread's name.
Store the starting thread
I could store a reference to the starting thread the moment the program starts up, i.e. while there are no other threads yet. This would be absolutely reliable, but it seems way too cumbersome for such a simple query.
Is there a more concise way of doing this?
The problem with threading.current_thread().name == 'MainThread' is that one can always do:
threading.current_thread().name = 'MyName'
assert threading.current_thread().name == 'MainThread' # will fail
Perhaps the following is more solid:
threading.current_thread().__class__.__name__ == '_MainThread'
Having said that, one may still cunningly do:
threading.current_thread().__class__.__name__ = 'Grrrr'
assert threading.current_thread().__class__.__name__ == '_MainThread' # will fail
But this option still seems better; "after all, we're all consenting adults here."
UPDATE:
Python 3.4 introduced threading.main_thread() which is much better than the above:
assert threading.current_thread() is threading.main_thread()
UPDATE 2:
For Python < 3.4, perhaps the best option is:
isinstance(threading.current_thread(), threading._MainThread)
The answers here are old and/or bad, so here's a current solution:
if threading.current_thread() is threading.main_thread():
    ...
This method is available since Python 3.4+.
If, like me, accessing protected attributes gives you the heebie-jeebies, you may want an alternative to using threading._MainThread, as suggested. In that case, you may exploit the fact that only the main thread can handle signals, so the following can do the job:
import signal

def is_main_thread():
    try:
        # Backup the current signal handler
        back_up = signal.signal(signal.SIGINT, signal.SIG_DFL)
    except ValueError:
        # Only the main thread can handle signals
        return False
    # Restore the signal handler
    signal.signal(signal.SIGINT, back_up)
    return True
Updated to address a potential issue pointed out by @user4815162342.

Can using a context manager in a generator lead to a resource leak?

I have a function that yields from a context manager:
def producer(pathname):
    with open(pathname) as f:
        while True:
            chunk = f.read(4)
            if not chunk:
                break
            yield chunk
It is not a problem when the generator is entirely consumed since, during the last iteration, the generator resumes execution after the yield statement, the loop breaks, and we nicely exit the context manager.
However, if the generator is only partially consumed, and there are no more consumers to consume it entirely, will the generator remain suspended forever? In that case, we will never exit from the context manager. Would that mean the file will remain open for the rest of the program execution? Or at least until the generator is garbage collected? Is this a corner case I should take care of by myself, or can I rely on the Python runtime to close dangling context manager in time?
FWIW, I've seen Generator and context manager at the same time and How to use a python context manager inside a generator but I don't think they really answer the same question. Unless I missed something?
If you fail to consume the whole generator, the context manager won't be cleaned until the generator is garbage collected, which may take quite a while if reference cycles are involved, or you're running on a non-CPython interpreter.
You can work around this by close-ing the generator-iterator: all generator functions provide a close method on the resulting generator-iterator that raises GeneratorExit inside it; the exception bubbles up through with statements and the like to make sure they're properly cleaned up deterministically.
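For instance, a minimal sketch of closing a partially consumed generator explicitly (the path is just a placeholder):
gen = producer("somefile.txt")   # placeholder path
first_chunk = next(gen)          # generator is now suspended inside its with block
gen.close()                      # raises GeneratorExit in the generator; the file is closed here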
To make it occur at a guaranteed point in time, you can use contextlib.closing to get guaranteed closing of the generator itself:
from contextlib import closing

with closing(producer(mypath)) as produced_items:
    for item in produced_items:
        ...  # do stuff, maybe break the loop early
Even if you break, return, or raise an exception, the with controlling produced_items will close it, which will in turn invoke cleanup for the with statements within it.

seek(0) on Linux /proc/sys/* pseudo-files?

Are there documented standards for the semantics of Linux /proc/sys file descriptors?
Is it proper to use seek(0) on them?
Here's a piece of code which seems to work fine for my tests:
#!/usr/bin/python
from time import sleep

with open('/proc/sys/fs/file-nr', 'r') as f:
    while True:
        d = f.readline()
        print(d.split()[0])
        f.seek(0)
        sleep(1)
This seems to work. However, I'd like to know if that's the right way to do such things or if I should loop over open() ... read() ... close()
In this particular case I'll be using this with the collectd Python plugin ... so this particular code would be running indefinitely in a daemon. However, I'm interested in the answer for the general class of questions.
(Incidentally, is there an "open files/inodes" module/plugin for collectd?)
Yes, it is proper to use lseek(2) and fseek(3) on files on the proc pseudo-file system. Calls which aren't appropriate will result in an error, thus if Python's seek (presumably calling lseek/fseek underneath) works, it's appropriate.

Should I use coroutines or another scheduling object here?

I currently have code in the form of a generator which calls an IO-bound task. The generator actually calls sub-generators as well, so a more general solution would be appreciated.
Something like the following:
def processed_values(list_of_io_tasks):
    for task in list_of_io_tasks:
        value = slow_io_call(task)
        yield postprocess(value)  # in real version, would iterate over
                                  # processed_values2(value) here
I have complete control over slow_io_call, and I don't care in which order I get the items from processed_values. Is there something like coroutines I can use to get the yielded results in the fastest order by turning slow_io_call into an asynchronous function and using whichever call returns fastest? I expect list_of_io_tasks to be at least thousands of entries long. I've never done any parallel work other than with explicit threading, and in particular I've never used the various forms of lightweight threading which are available.
I need to use the standard CPython implementation, and I'm running on Linux.
Sounds like you are in search of multiprocessing.Pool(), specifically the Pool.imap_unordered() method.
Here is a port of your function to use imap_unordered() to parallelize calls to slow_io_call().
import multiprocessing

def processed_values(list_of_io_tasks):
    pool = multiprocessing.Pool(4)  # num workers
    results = pool.imap_unordered(slow_io_call, list_of_io_tasks)
    while True:
        yield results.next(9999999)  # large time-out
Note that you could also iterate over results directly (i.e. for item in results: yield item) without a while True loop; however, calling results.next() with a time-out value works around this multiprocessing keyboard-interrupt bug and allows you to kill the main process and all subprocesses with Ctrl-C. Also note that the StopIteration exception is not caught in this function, but one will be raised when results.next() has no more items to return. This used to be legal for generator functions, which were expected to either raise StopIteration when there are no more values to yield or just stop yielding, with StopIteration raised on their behalf; however, since Python 3.7 (PEP 479) a StopIteration that escapes a generator is turned into a RuntimeError, so on current Python it is safer to catch it and return instead.
To use threads in place of processes, replace
import multiprocessing
with
import multiprocessing.dummy as multiprocessing
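For reference, here is a self-contained sketch of the thread-based variant, with a dummy slow_io_call standing in for real I/O and a StopIteration guard for Python 3.7+ (PEP 479):
import multiprocessing.dummy as multiprocessing  # thread-based Pool
import time

def slow_io_call(task):
    time.sleep(0.1)              # stand-in for a slow network or disk call
    return task * 2

def processed_values(list_of_io_tasks):
    with multiprocessing.Pool(4) as pool:
        results = pool.imap_unordered(slow_io_call, list_of_io_tasks)
        while True:
            try:
                yield results.next(9999999)   # large time-out, see note above
            except StopIteration:
                return                        # PEP 479: don't let it escape the generator

if __name__ == "__main__":
    for value in processed_values(range(10)):
        print(value)             # results arrive in completion order, not input order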
