How can I write a log entry – serialising information to text – that captures all the useful information about an exception?
In Python 3, exception instances got considerably more awesome, but also more complex: they may or may not have a __cause__, a __context__, or a __traceback__, they always have a type, and there are probably more wrinkles I'm missing.
The information needed to diagnose an exception later can include any or all of this. It can also, of course, include the same information for any related (cause, or context, or …?) exception instance, recursively.
At the point of logging a caught exception (prior to re-raising it), what single “get me all the information” technique is there which I can use to record the full diagnostic information contained in the exception object, as text which will be useful to the eventual diagnosis effort?
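For what it's worth, here is a minimal sketch of one such technique: the standard traceback module already knows how to serialise the whole chain (__traceback__, __cause__, __context__), so you can hand it the caught exception and log the resulting text. The process function and exception types below are hypothetical, only there to force a chained exception.

import logging
import traceback

logger = logging.getLogger(__name__)


def process(item):
    # Hypothetical operation, purely for illustration.
    raise ValueError(f"cannot process {item!r}")


try:
    try:
        process("widget")
    except ValueError as exc:
        raise RuntimeError("processing failed") from exc
except RuntimeError as exc:
    # traceback.format_exception walks __traceback__, __cause__ and
    # __context__ for you, producing the same text the interpreter would
    # print for an uncaught exception, including the chained exceptions.
    text = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    logger.error("Caught exception:\n%s", text)
    raise

Calling logger.exception("Caught exception") inside the except block produces essentially the same text via exc_info, so either route captures the full chain.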
If CQS prevents commands from returning status variables, how does one code for commands that may not succeed? Let's say you can't rely on exceptions.
It seems like anything that is request/response is a violation of CQS.
So it would seem like you would have a set of "mother may I" methods giving the statuses that would have been returned by the command. What happens in a multithreaded / multi-computer application?
Say I have three clients requesting that a server's object increase by one (and the object is limited to 0-100). All three check whether they can, but only one succeeds - the other two fail because the object just hit its limit. It would seem like a returned status would solve the problem here.
It seems like anything that is request/response is a violation of CQS.
Pretty much yes, hence Command-Query-Separation. As Martin Fowler nicely puts it:
The fundamental idea is that we should divide an object's methods into two sharply separated categories:
Queries: Return a result and do not change the observable state of the system (are free of side effects).
Commands: Change the state of a system but do not return a value [my emphasis].
Requesting that a server's object increase by one is a Command, so it should not return a value - processing a response to that request means that you are doing a Command and Query action at the same time, which breaks the fundamental tenet of CQS.
So if you want to know what the server's value is, you issue a separate Query.
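To illustrate with a minimal sketch (hypothetical names, Python for brevity): the increment is a command that returns nothing, and reading the value is a separate query.

class Counter:
    """Toy illustration of command-query separation."""

    def __init__(self, limit=100):
        self._value = 0
        self._limit = limit

    def increment(self):
        # Command: changes state, returns nothing.
        if self._value < self._limit:
            self._value += 1

    def value(self):
        # Query: returns state, changes nothing.
        return self._value


counter = Counter()
counter.increment()       # command - no return value
print(counter.value())    # query - a separate call to observe the state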
If you really need a request-response pattern, you either need to have something like a convoluted callback event process to issue queries for the status of a specific request, or pure CQS isn't appropriate for this part of your system - note the word pure.
Multithreading is one of the main drawbacks of CQS and can make it hard to apply. Wikipedia has a basic example and discussion of this, and also links to the same Martin Fowler article, where he suggests it is OK to break the pattern to get something done without driving yourself crazy:
[Bertrand] Meyer [the inventor of CQS] likes to use command-query separation absolutely, but there are exceptions. Popping a stack is a good example of a query that modifies state. Meyer correctly says that you can avoid having this method, but it is a useful idiom. So I prefer to follow this principle when I can, but I'm prepared to break it to get my pop.
TL;DR - I would probably just look at returning a response, even though it isn't correct CQS.
Article "Race Conditions Don’t Exist" may help you to look at the problem with CQS/CQRS mindset.
You may want to step back and ask why the counter value absolutely has to be known before sending a command. Apparently, you want to decide on the client side whether the counter can be increased further or not.
The approach is to let the server make that decision. Let all the clients send commands (some will succeed and some will fail). Eventually the clients will get a consistent view of the server object's state (where the limit has been reached) and may finally stop sending such commands.
This time window of inconsistency leads to wrong decisions by the clients, but it never breaks the consistency of the object (or domain model) on the server side as long as commands are handled adequately.
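A rough sketch of what "let the server decide" might look like (hypothetical names, not anyone's actual API): the command either succeeds or is rejected server-side, and clients simply stop sending once their queries show the limit has been reached.

class CommandRejected(Exception):
    pass


class CounterService:
    """Hypothetical server-side object that owns the 0-100 invariant."""

    def __init__(self, limit=100):
        self._value = 0
        self._limit = limit

    def handle_increment(self):
        # Command handler: the server, not the client, decides whether the
        # increment is allowed. Rejected commands leave the state untouched.
        if self._value >= self._limit:
            raise CommandRejected("limit reached")  # or publish a failure event
        self._value += 1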
The CQS principle says every method should either be a command that performs an action, or a query that returns data to the caller, but not both.
It makes sense for a Query not to do anything else, because you don't expect a query to change the state.
But it looks harmless if a Command returns some extra piece of information. You can either use the returned value or ignore it. Why does the CQS principle require a Command not to return any values?
But it looks harmless if a Command returns some extra piece of information?
It often is. It sometimes isn't.
People can start confusing queries with commands, or calling commands more for the information they return than for their effect (along with "clever" ways of preventing that effect from being a real effect, which can be brittle).
It can lead to gaps in an interface. If the only use case people can envision for a particular query is hand-in-hand with a particular command, it may seem pointless to add the pure form of the query (e.g. writing a stack with a Pop() but no Peek()), which can restrict the flexibility of the component in the face of future changes.
In a way, "looks harmless" is exactly what CQS is warning you about, in banning such constructs.
Now, that isn't to say that you might not still decide that a particular command-query combination is useful enough to be worth it, but in weighing up the pros and cons of such a decision, CQS is always a voice arguing against it.
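The stack example in concrete terms (a sketch only, using a Python list as the backing store): offering peek() alongside pop() keeps a pure query available, even if most callers only ever pop.

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        # Command: changes state, returns nothing.
        self._items.append(item)

    def peek(self):
        # Query: reports the top item without modifying the stack.
        return self._items[-1]

    def pop(self):
        # Command + query hybrid - the idiom Meyer and Fowler discuss:
        # it both removes the top item and returns it.
        return self._items.pop()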
From my understanding, one of the benefits of CQS is how well it works in distributed environments. Commands become isolated units that can be executed immediately, placed in a queue to be executed at a later date, executed by a remote event handler, and so on.
If the commander interface were to specify a return type, you would greatly weaken the CQS pattern's ability to fit well within a distributed model.
The common approach to solving this problem (see, for instance, this article by Mark Seemann) is to generate a unique ID, such as a GUID, that is unique to the event executed by the command handler. This is then persisted so the data can be identified at a later date.
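A sketch of that idea (hypothetical names - place_order and command_bus are made up here and are not the article's own code): the caller generates the identifier up front, so the command handler can stay void while the result can still be correlated later.

import uuid


def place_order(command_bus, order_details):
    # The caller generates the identifier itself, so the command handler
    # never needs to return anything: the caller already knows which
    # resource to query for, or correlate with, later on.
    order_id = uuid.uuid4()
    command_bus.send({"type": "PlaceOrder", "id": str(order_id), "details": order_details})
    return order_id  # returned by this local helper, not by the command handler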
I'd like to build an OO hierarchy of errors and warnings returned to the client during a, let's say, pricing operation:
interface PricingMessage {}
interface PricingWarning extends PricingMessage {}
interface PricingError extends PricingMessage {}
class NoSuchProductError implements PricingError {
...
}
I'm not very keen on the name PricingMessage. What is the concept that includes errors and warnings?
EDIT: To be clear, I'm looking for a common concept or name for errors and warnings specifically (excluding e.g. general info messages). For instance, compilers also report errors and warnings. What are these?
Some suggestions...
"Result"
An operation has results, which could be some number of errors, warnings, notes and explicit or implied (no errors) success.
PricingResult
"Issue"
An operation ran, but with issues, some of which could be fatal (errors) and some of which might not be (warnings). If there are no issues, success is implied.
PricingIssue
"Condition"
An operation might have or be in an error condition or a warning condition. I think some compilers use "condition" as an abstract term for an error or warning.
PricingCondition
"Diagnostic"
The outcome of an operation might include diagnostics for errors and warnings. Another compiler term, I believe.
PricingDiagnostic
If you were dealing with Java, or similar OO languages, the word you are looking for would be Exception. This indicates that you have reached an "exceptional" condition which needs to be handled in a controlled way.
Through looking at a few synonym lists, I found the following:
Anomaly, oddity, deviation
Alert, message, notification
Fault, misstep, failure, glitch
I prefer the name Alert. An alert, IMO, can have any level of severity; it could be identified as informational, warning, critical, or any other level deemed appropriate.
The counter-argument I have heard to this naming is that alert the noun follows too closely on alert the verb, blurring the distinction that the object (noun) may or may not have been brought to the user's attention yet (verb). In this context, naming them alerts could create a bit of cognitive dissonance, and perhaps confusion for developers reasoning about your code.
The best I can propose is to make a hard distinction in your code base between Alert (the object representing the exceptional condition) and Notification (the act of bringing the alert to the user's attention) to keep things intuitive for programmers going forward.
In theory these could be defined as events - so you could use that.
This is very subjective, but here are a few suggestions:
Output
LogEntry
In web design, the term "admonition" is sometimes used for a block of text that can be an error, warning, or informational.
I'm wondering where I can find some information about the exceptions that are thrown by the SharePoint object model. Unfortunately, the documentation on MSDN is not very useful here, as the documentation for many methods lacks information about which exceptions might be thrown and in which cases they will be thrown.
So where do you get your information about exceptions from?
Unfortunately, the best answer is: it depends. A lot of the time I will start by doing a Google search of the error text.
If you are looking for a generic list of possible exceptions thrown by SharePoint, you will be very disappointed. There is nothing out there that I know of.
Over time you will recognize patterns that will help you.
For example, one common exception I have seen is "FileNotFound". This usually occurs if I ask SharePoint to open an SPWeb or an SPList that does not exist.
I've developed a "Proof of Concept" application that logs unhandled exceptions from an application to a bug-tracking system (in this case Team Foundation Server, but it could be ANY bug tracking system). A limitation of this idea is that I don't want duplicate Bug Items opened every time the same exception is thrown (for example, many users encounter the exception - it's still a single "bug").
My first attempt was to store the Exception Type, Message and Stack Trace as fields in the Bug Tracking System. The logging component would then run a query against the Bug "Store" to see if there is an open bug with the same information. (This example is .NET - but I would think the concept is platform independent.)
The obvious problem is that these fields can be very large (particularly the stack trace), require a "Full-Text" type of implementation to store, and searching them is very expensive.
I was wondering what approaches have been defined for this problem. I had heard that FogBugz for example had such a feature for automated bug tracking, and was curious how it was implemented.
If you have the stack trace, you could find the last statement in the stack trace and compare it with the ones already logged. If symbols were included, you'd also get the line number. So now you have two things for comparison - the actual error number and the statement that failed - and possibly the actual line number as well. If something has already been logged with all of those, then it's more than likely (not 100%, of course) the same issue.
In fact, you could probably parse the stack trace with the "at" word, as each line in the stack trace begins with "at". So, look for the last "at", get that line, compare it with the same last "at" line of the stored stack traces, and you might actually have something.
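For illustration, a rough sketch of that parsing in Python (the question is .NET, but the text handling is the same; which frame best identifies the failure varies by runtime and is an assumption here):

def last_frame(stack_trace: str) -> str:
    # Keep only lines that look like frames ("at ..."), then take the last one.
    frames = [line.strip() for line in stack_trace.splitlines()
              if line.strip().startswith("at ")]
    return frames[-1] if frames else ""


def looks_like_duplicate(new_trace: str, stored_traces: list) -> bool:
    # The same final frame as an already-logged trace suggests (but does not
    # prove) the same underlying bug.
    new_frame = last_frame(new_trace)
    return any(new_frame == last_frame(old) for old in stored_traces)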
HTH!
You could create a checksum hash of the stack trace and store that as an indexed column. That way the query to the Bug Store would be pretty fast to avoid duplicates on insert.
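A sketch of the hashing idea (Python here only for brevity; any language with a hash function works): normalise the trace first so incidental formatting differences don't defeat the de-duplication, then store the digest in an indexed column.

import hashlib


def stack_trace_fingerprint(exception_type: str, stack_trace: str) -> str:
    # Normalise whitespace so incidental formatting differences don't
    # produce distinct hashes for the same underlying failure.
    normalised = "\n".join(line.strip()
                           for line in stack_trace.splitlines() if line.strip())
    # A fixed-length digest is cheap to index and to compare on insert.
    return hashlib.sha256(f"{exception_type}\n{normalised}".encode("utf-8")).hexdigest()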
You could look at the source code for one of the existing open-source solutions that aggregate exceptions.
For example: https://github.com/getsentry/sentry/tree/master/src/sentry
It is not a simple problem, and there are complex heuristics involved (e.g. the same exception is reported in different ways on different browsers, and exceptions caused by browser extensions are common but rarely important).