Example error handling function:
def read_file(filename):
    try:
        with open(filename, 'rb') as fd:
            x = fd.read()
    except FileNotFoundError as e:
        return e
    return x
I would call the function like so:
file = read_file("test.txt")
if file:
    # do something
Is there a more efficient/effective way to handle errors than using return multiple times?
It's very strange to catch e and then return it; why would a user of your function want the error to be returned instead of raised? Returning an error doesn't handle it; it just passes responsibility to the caller to handle the error. Letting the error be raised is a more natural way to make the caller responsible for handling it. So it makes sense not to catch the error at all:
def read_file(filename):
    with open(filename, 'rb') as fd:
        return fd.read()
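The caller can then handle the failure at the point where it has the context to decide what a missing file means, for example:
    try:
        data = read_file("test.txt")
    except FileNotFoundError:
        # decide here what a missing file means for the caller
        data = None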
For your desired use case where you want to write if file: to test whether the file existed, your read_file function could catch the error and return None, so that your if condition will be falsy:
def read_file(filename):
    try:
        with open(filename, 'rb') as fd:
            return fd.read()
    except FileNotFoundError:
        return None
However, if the caller isn't aware that the function might return None, the failure will surface later as an error from using None where the file data was expected, rather than as a FileNotFoundError, which makes the problem harder to trace.
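For example, a hypothetical caller that forgets the None case:
    data = read_file("missing.txt")  # returns None if the file doesn't exist
    print(len(data))                 # TypeError: object of type 'NoneType' has no len()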
If you do intend for your function to be called with a filename that might not exist, naming the function something like read_file_if_exists might be a better way to make clear that calling this function with a non-existent filename isn't considered an error.
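A minimal usage sketch for that variant; note the explicit None check, since empty file contents would also be falsy (process is a placeholder for your own code):
    data = read_file_if_exists("test.txt")
    if data is not None:
        process(data)  # the file existed, though it may be empty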
I'm trying to extract data from Trusted Advisor through a Lambda function and upload it to S3. Part of the function appends the check data to a list; however, that block throws an error. The specific block is:
try:
    check_summary = support_client.describe_trusted_advisor_check_summaries(
        checkIds=[checks['id']])['summaries'][0]
    if check_summary['status'] != 'not_available':
        checks_list[checks['category']].append(
            [checks['name'], check_summary['status'],
             str(check_summary['resourcesSummary']['resourcesProcessed']),
             str(check_summary['resourcesSummary']['resourcesFlagged']),
             str(check_summary['resourcesSummary']['resourcesSuppressed']),
             str(check_summary['resourcesSummary']['resourcesIgnored'])
             ])
    else:
        print("unable to append checks")
except:
    print('Failed to get check: ' + checks['name'])
    traceback.print_exc()
The error log shows:
unable to append checks
I'm new to Python, so I'm unsure how to check the traceback stack under the else: statement. Also, am I doing anything wrong in the above? Please help.
You are not calling the s3_upload function anywhere; the code is also invalid because it uses a file_name variable that is never initialized.
I've reviewed your script:
traceback.print_exc() should be called before the return statement, so that the traceback is actually printed before the function exits.
if __name__ == '__main__':
    lambda_handler
This block only runs when the file is executed directly, not when it is imported.
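Note that lambda_handler is referenced there but never called; presumably a call like this was intended (the empty event dict and None context are placeholder test values):
    if __name__ == '__main__':
        lambda_handler({}, None)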
According to the documentation, the first parameters of the put_object method are:
def put_object(self, bucket_name, object_name, data, length,
Fix the arguments you pass to put_object to match that signature.
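A minimal sketch of a matching call, assuming this is the MinIO client's put_object method (the client, bucket, and object names here are placeholders):
    import io

    raw = b'{"status": "ok"}'
    client.put_object('my-bucket', 'checks/result.json', io.BytesIO(raw), len(raw))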
Also, you're not using s3_upload in your Lambda.
Let's say that I have a bunch of instructions which might all raise exceptions, and I'd like to simply ignore those that fail:
failable_1
failable_2
...
failable_n
Ignoring them with the usual exception pattern might quickly become cumbersome:
try:
    failable_1
except SomeError:
    pass
try:
    failable_2
except SomeError:
    pass
...
try:
    failable_n
except SomeError:
    pass
This is especially true when declaring a list of possibly non-existent names:
my_list = [optional_1, optional_2, ..., optional_n]
(Let's axiomatically assume that somewhere else in the code, there was something like:
for (var, val) in zip(optional_variable_names_list, values):
    exec(var + "=" + repr(val))
)...
Because in this case, you cannot even write the name in the code.
my_list = []
for variable in [optional_1, optional_2, ..., optional_n]:  # problem remains here
    try:
        my_list.append(variable)
    except:
        pass
wouldn't work. You have to use eval():
my_list = []
for variable in ["optional_1", "optional_2", ..., "optional_n"]:
    try:
        my_list.append(eval(variable))
    except:
        pass
So my question is:
Isn't there a way to write something like the on error next or on error ignore that existed in some older languages? Some kind of:
ignore SomeError:
    failable_1
    failable_2
    ...
    failable_n
or
ignore NameError:
    my_list = [optional_1, optional_2, ..., optional_n]
And if not, why would it be a bad idea?
You can simplify the pattern somewhat by using a context manager that suppresses an exception by returning True if the exception that occurred is one of those specified in the constructor, and otherwise lets the exception propagate:
class Ignore:
    def __init__(self, *ignored_exceptions):
        self.ignored_exceptions = ignored_exceptions

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Returning True suppresses the exception; a falsy return lets
        # any other exception (or a clean exit) propagate normally.
        return isinstance(exc_value, self.ignored_exceptions)
so that:
with Ignore(NameError, ValueError):
    a = b
outputs nothing, while:
with Ignore(NameError, ValueError):
    1 / 0
raises ZeroDivisionError: division by zero as expected.
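Incidentally, the standard library already ships this pattern as contextlib.suppress (available since Python 3.4), so you don't have to write the class yourself:
    from contextlib import suppress

    with suppress(NameError, ValueError):
        a = b  # the NameError is silently swallowed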
Folks, I'm trying to configure logging from an external YAML configuration file which may or may not have the necessary options, forcing me to check and fail over in several different ways. I wrote two solutions that do the same thing, but in different styles:
More traditional "C-like":
try:
    if config['log']['stream'].lower() == 'console':
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(fmt='scheduler: (%(levelname).1s) %(message)s'))
    elif config['log']['stream'].lower() == 'syslog':
        raise ValueError
    else:
        print('scheduler: (E) Failed to set log stream: Unknown stream: \'' + config['log']['stream'] + '\'. Failing over to syslog.', file=sys.stderr)
        raise ValueError
except (KeyError, ValueError) as e:
    if type(e) == KeyError:
        print('scheduler: (E) Failed to set log stream: Stream is undefined. Failing over to syslog.', file=sys.stderr)
    handler = logging.handlers.SysLogHandler(facility=logging.handlers.SysLogHandler.LOG_DAEMON, address='/dev/log')
    handler.setFormatter(logging.Formatter(fmt='scheduler[%(process)d]: (%(levelname).1s) %(message)s'))
finally:
    log.addHandler(handler)
And "pythonic" with internal procedure:
def setlogstream(stream):
    if stream == 'console':
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(fmt='scheduler: (%(levelname).1s) %(message)s'))
    elif stream == 'syslog':
        handler = logging.handlers.SysLogHandler(facility=logging.handlers.SysLogHandler.LOG_DAEMON, address='/dev/log')
        handler.setFormatter(logging.Formatter(fmt='scheduler[%(process)d]: (%(levelname).1s) %(message)s'))
    else:
        raise ValueError
    log.addHandler(handler)

try:
    setlogstream(config['log']['stream'].lower())
except KeyError:
    print('scheduler: (E) Failed to set log stream: Stream is undefined. Failing over to syslog.', file=sys.stderr)
    setlogstream('syslog')
except ValueError:
    print('scheduler: (E) Failed to set log stream: Unknown stream: \'' + config['log']['stream'] + '\'. Failing over to syslog.', file=sys.stderr)
    setlogstream('syslog')
They both do what I need; both are short, and both are extensible in case I need more streams. But now I wonder: which one is better, and why?
Saying one is "better" is mostly a matter of personal preference; if it accomplishes the task it needs to, then pick whichever way you prefer. That said, I think the second one should be used, and here's why:
- Defining setlogstream() makes it clear what that section of your code does, and lets you use it again later if you need to.
- Using separate except clauses makes your code more readable and easier to follow; this could be especially useful if another error somehow occurred while handling the first.
Overall, the second one is far more readable, and your future self will thank you for writing it that way.
I am trying to create a TCP server in Python with Tornado.
My handle_stream method looks like this:
async def handle_stream(self, stream, address):
    while True:
        try:
            stream.read_until_close(streaming_callback=self._on_read)
        except StreamClosedError:
            break
In the _on_read method I am trying to read and process the data, but whenever a new client connects to the server it gives an AssertionError: Already reading error.
File "/.local/lib/python3.5/site-packages/tornado/iostream.py", line 525, in read_until_close
future = self._set_read_callback(callback)
File "/.local/lib/python3.5/site-packages/tornado/iostream.py", line 860, in _set_read_callback
assert self._read_future is None, "Already reading"
read_until_close asynchronously reads all data from the socket until it is closed. It has to be called only once, but the loop forces a second call, which is why you get the error:
- on the first iteration, read_until_close sets streaming_callback and returns a Future that you could await or use later;
- on the second iteration, read_until_close raises the exception, since a callback was already set on the first iteration.
read_until_close returns a Future object, and you can await it to make things work:
async def handle_stream(self, stream, address):
    try:
        await stream.read_until_close(streaming_callback=self._on_read)
    except StreamClosedError:
        pass  # do something, e.g. log the disconnect
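For completeness, a minimal sketch of what the _on_read callback might look like; Tornado invokes the streaming_callback with each chunk of bytes as it arrives (the body here is just a placeholder):
    def _on_read(self, data):
        # 'data' is the latest chunk of bytes read from the stream
        print('received chunk:', data)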
I need to get the information contained in the exception. This is the code I use:
try:
    result = yield user_collection.insert_many(content, ordered=False)
except BulkWriteError as e:
    print(e)
And in my test, when I get into the except block with this line,
self.insert_mock.side_effect = [BulkWriteError('')]
it gives me
batch op errors occurred
instead of a MagicMock or a Mock.
How can I mock the BulkWriteError and give it a default return_value and see it when I use print(e)?
Something like this should allow you to test that your print was called correctly.
import builtins  # so we can mock out print

class BulkWriteErrorStub(BulkWriteError):
    ''' Stub out the exception so you can bypass the constructor. '''

    def __str__(self):
        return 'fake_error'

@mock.patch.object(builtins, 'print')
def testRaisesBulkWrite(self, mock_print):
    ...
    self.insert_mock.side_effect = [BulkWriteErrorStub]
    with self.assertRaises(...):
        ...  # call the code under test here
    mock_print.assert_called_once_with('fake_error')
I haven't tested this so feel free to edit it if I made a mistake.
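As a side note on the mechanism this relies on: when a Mock's side_effect is an exception class or instance, calling the mock raises it rather than returning it, which is what lets the stub reach your except block. A minimal standalone illustration:
    from unittest import mock

    m = mock.Mock(side_effect=ValueError('boom'))
    try:
        m()
    except ValueError as e:
        print(e)  # prints: boom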