I'm trying to avoid warning messages when copying lots of walls with the Revit 2018 API, for instance when some of them overlap. For that, I'm implementing the FailureHandler class as documented on The Building Coder, slightly adapted for Python, as also documented here.
Now, in a simple test case that copies a few walls without raising any warnings or errors (if I don't handle failures at all and just copy the walls, it works perfectly), as soon as I implement and use the FailureHandler class, all my wall-creation Transactions end up RolledBack. They shouldn't, since there are no warnings! I reduced the FailureHandler implementation to its strict minimum to try to understand the behavior, but it keeps rolling back the transactions...
Here is my implementation for FailureHandler:
class FailureHandler(IFailuresPreprocessor):
    def __init__(self):
        self.ErrorMessage = ""
        self.ErrorSeverity = ""

    def PreprocessFailures(self, failuresAccessor):
        return FailureProcessingResult.ProceedWithCommit
As you can see, I'd expect it just to proceed with the Transaction. But it rolls back.
The main routine:
wallTransaction = Transaction(doc,"creating new walls")
wallTransaction.Start()
failureHandlingOptions = wallTransaction.GetFailureHandlingOptions()
failureHandler = FailureHandler()
failureHandlingOptions.SetFailuresPreprocessor(failureHandler)
failureHandlingOptions.SetClearAfterRollback(True)
wallTransaction.SetFailureHandlingOptions(failureHandlingOptions)
newWall = Wall.Create(doc, geoLine, wallTypeId, levId, wallHeight, 0, False, True)
wallTransaction.Commit()
print wallTransaction.GetStatus()
Again, without all these failureHandler considerations, this routine creates the walls without warning/errors.
Can somebody explain to me why it rolls back? Wouldn't FailureProcessingResult.ProceedWithCommit simply imply that the transaction should commit?
Thanks a lot!
Please explore The Building Coder topic group on Detecting and Handling Dialogues and Failures, especially the last discussion on Gathering and Returning Failure Information.
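Roughly speaking, the preprocessor pattern discussed there inspects the failuresAccessor and only asks for a re-commit after it has actually resolved something; otherwise it lets normal failure processing continue. A minimal IronPython sketch, assuming the standard Revit API names (FailureSeverity, GetFailureMessages, DeleteWarning), not your exact setup:

from Autodesk.Revit.DB import (IFailuresPreprocessor,
                               FailureProcessingResult,
                               FailureSeverity)

class WarningSwallower(IFailuresPreprocessor):
    # Deletes warnings (e.g. the overlapping-walls warning) and leaves
    # everything else to normal failure processing.
    def PreprocessFailures(self, failuresAccessor):
        handledWarning = False
        for failure in failuresAccessor.GetFailureMessages():
            if failure.GetSeverity() == FailureSeverity.Warning:
                failuresAccessor.DeleteWarning(failure)
                handledWarning = True
        if handledWarning:
            # something was resolved, so ask Revit to try committing again
            return FailureProcessingResult.ProceedWithCommit
        # nothing to handle: let the transaction proceed normally
        return FailureProcessingResult.Continue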
I'm learning Python and I built a TicTacToe game. I'm now writing unit tests and improving my code. I was advised to make a while loop in my turn function and use the bools marked in the code below, but I don't know why. I can see some reason why this makes sense, but because of how new I am, I can't really explain why it makes sense even to myself. Can someone explain why this would make more sense than another combination of bools?
print(TTTGame.player + "'s TURN!")
print('pick a number 1 through 9')
position = int(input()) - 1

valid = False                                   # <-- suggested bool
while not valid:                                # <-- suggested loop
    try:
        position = int(input('pick a spot'))
    except ValueError:
        print('pick a number 1 though 9')
    x = TTTGame.board[position]
    if x == '-':
        TTTGame.board[position] = TTTGame.player
    else:
        print('Cant Go There!')
    TTTGame.board[position] = TTTGame.player
    valid = True                                # <-- suggested bool
Based on the code you provided, I would have to work backwards to create a worse example, and without seeing what you did initially that might cause more confusion than it would clear up. A fair number of the things I mention are subjective to each developer, but the principles tend to be commonly agreed upon to some degree (especially in Python).
That being said, I will try to explain why this pattern works well:
Intuitiveness: Reading this code from the outside, I can tell that you are doing validation, primarily because, reading the code as plain English, you assume the input to be invalid (valid == False) and won't leave the loop until it has been validated (valid == True). Even without the additional context of knowing it's a tic-tac-toe game, I can immediately tell that this is the case just by reading the code.
Speed: In terms of how long the function takes to complete, it terminates as soon as the input is valid, as opposed to some other ways of doing this that require more than one check.
Pythonic: In the Python community there is an emphasis on 'pythonic' code, which basically means there are common ways of doing things that recur across all kinds of Python code. There are many patterns for validating input (for loops, terminating conditions, separate functions, etc.), but because of how this is written I would be able to debug it without necessarily knowing your whole codebase. It also means that if you see a similar chunk of code in a large project, you are more likely to recognize what's happening. It follows the principles of the Zen of Python: you are being explicit while maintaining readability, errors are not passing silently, and you are keeping it simple.
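To make that concrete outside of your game class, here is a minimal, self-contained sketch of the same assume-invalid-until-proven-valid loop; board and player are just stand-ins for TTTGame.board and TTTGame.player:

# Sketch of the "assume invalid, loop until valid" pattern.
board = ['-'] * 9
player = 'X'

valid = False
while not valid:
    try:
        position = int(input('pick a number 1 through 9: ')) - 1
    except ValueError:
        print('that was not a number')
        continue                      # bad input: ask again
    if not 0 <= position <= 8:
        print('pick a number 1 through 9')
    elif board[position] != '-':
        print("can't go there!")
    else:
        board[position] = player      # the only branch that ends the loop
        valid = True

print(board)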
I'm trying to get a better feel for how to handle error states in Haskell, since there seem to be a lot of ways to do it. Ideally, my data structures would make any invalid inputs unrepresentable, but despite considerable effort to the contrary, I still occasionally end up working with data where the type system can allow invalid states. As an example, let's consider that my program input is the training results for a neural network. In order for math to work, each matrix needs to have the correct bounds, and that's not (really) representable by the type system. If data is invalid, there's really nothing the application can do but halt any further processing and notify someone of the problem (so it's not recoverable). What's the best way to handle this in Haskell? It seems like I could:
1) Use error or other partial functions when processing my data. My understanding is this should only be used to represent a bug in the code. So it would have to be coupled with some sort of validation at the point that I load the data, and any point "after" that check I just assume that the data is in a valid format. This feels imperative to me, and doesn't seem to fit very well with lazy, declarative code.
2) Throw an exception when processing the data using Control.Exception.throw, and then catch it at the top level where I can alert someone. Contrary to error, I believe this doesn't indicate a bug in the program, so perhaps there wouldn't be verification when I load the data beyond what can be represented through the type system? The presence or absence of an exception when processing the data would define the verification.
3) Lift any data processing that could fail into the IO monad and use Control.Exception.throwIO.
4) Lift any data processing that could fail into the IO monad and use fail (I've read that using fail is frowned on by the community?)
5) Return an Either or something similar, and let that bubble up through all your logic. I've definitely had some cases where composing Eithers becomes (to me) exceedingly impractical.
6) Use Control.Monad.Exception, which I only marginally understand, but it seems to involve lifting any data processing that could fail into some exception monad, which I think is supposed to be more easily composable than Either?
and I'm not even sure that's all the options. Is there an approach to this problem that's generally accepted by the community, or is this really an opinionated topic?
I've worked with tkinter for a bit of time now.
There are two ways to configure a widget, or at least I only know of two:
1: frame.config(bg='#123456')
2: frame["bg"] = '#123456'
I use the latter more often; the first one only seems useful to me when several things need to be set at the same time.
Recently I was wondering whether one of them is 'better', e.g. faster, or has any other advantage.
I don't think it's a crucially important question, but maybe someone knows.
Studying the tkinter code base, we find the following:
class Frame(Widget):
    # Other code here
    ...

class Widget(BaseWidget, Pack, Place, Grid):
    pass

class BaseWidget(Misc):
    # other code here
    ...

class Misc:
    # various code
    ...

    def __setitem__(self, key, value):
        self.configure({key: value})
Therefore, the two methods are actually equivalent. The line
frame['bg'] = '#123456'
is interpreted as frame.__setitem__('bg', '#123456'), which, after passing through the inheritance chain, ends up on the internal class Misc, which simply forwards it to the configure method. As far as your question about efficiency is concerned, the first method is probably slightly faster because it skips the extra __setitem__ call, but the speed difference is far too small to be worth worrying about.
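If you want to see both spellings in action, here is a tiny sketch (it needs a display to run):

import tkinter as tk

root = tk.Tk()
frame = tk.Frame(root, width=200, height=100)
frame.pack()

# Both lines end up in Misc.configure(); the second just takes the
# extra detour through Misc.__setitem__.
frame.config(bg='#123456')
frame['bg'] = '#123456'

# config() can also set several options in one call, which the
# item-assignment form cannot.
frame.config(bg='#654321', width=300)

print(frame['bg'])   # reading goes through Misc.__getitem__ -> cget()
root.destroy()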
I came across the following post but had a hard time understanding it: self message (non-recursive) vs. self recursive message.
I also came across the example at http://www.zicomi.com/messageRecursion.jsp, hoping a real-world scenario would help, but that confused me further. Why would you need a recursive message when the order has been passed to the kitchen and the chef? I would have thought all you needed was a self message, i.e. the chef completing the order and then passing it to the waiter.
The chef example is arguably "wrong" in what it shows and describes.
Simply put, a message to self just means that the next method to be invoked happens to be on the same object (the same class). E.g. a call to SavingsAccount.withdraw(anAmount) may call SavingsAccount.getBalance() to determine whether there are enough funds to continue with the withdrawal.
A recursive call is a special case of a call to self in that it invokes the same method again (with different state, so that it can eventually return out of the recursive calls). Some problems lend themselves to this solution; an example is factorial (see Factorial). Doing a factorial without recursion (or an explicit loop) would be impossible for all but the simplest cases, because of the volume of inline code needed. If you look at the factorial code example, you'll see that the argument is reduced by one on each call (factorial(n-1)) and the recursion stops when n reaches zero. Trying to write this out inline for a value like 1,000,000 would not be feasible; recursion handles it naturally.
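In code (with made-up names), the difference looks roughly like this:

class SavingsAccount:
    def __init__(self, balance):
        self.balance = balance

    def get_balance(self):
        return self.balance

    def withdraw(self, amount):
        # Message to self: withdraw() invokes a *different* method
        # on the same object.
        if self.get_balance() < amount:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

def factorial(n):
    # Recursive message: the same operation calls itself with a smaller
    # argument until it hits the base case.
    if n <= 1:
        return 1
    return n * factorial(n - 1)

account = SavingsAccount(100)
print(account.withdraw(40))   # 60
print(factorial(5))           # 120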
If you look at the call stack of a program and treat each return pointer as a token, what kind of automata is needed to build a recognizer for the valid states of the program?
As a corollary, what kind of automata is needed to build a recognizer for a specific bug state?
(Note: I'm only looking at the info that could be had from this function.)
My thought is that if these form regular languages, then some interesting tools could be built around that. E.g. given a set of crash/failure dumps, automatically group them and generate a recognizer to identify new instances of known bugs.
Note: I'm not suggesting this as a diagnostic tool but as a data management tool for turning a pile of crash reports into something more useful.
"These 54 crashes seem related, as do those 42."
"These new crashes seem unrelated to anything before date X."
etc.
It would seem that I've not been clear about what I'm thinking of accomplishing, so here's an example:
Say you have a program that has three bugs in it.
Two bugs that cause invalid args to be passed to a single function, tripping the same sanity check.
A function that, if given a (valid) corner case, goes into infinite recursion.
Also assume that when the program crashes (failed assert, uncaught exception, seg-V, stack overflow, etc.) it grabs a stack trace, extracts the call sites on it and ships them to a QA reporting server. (I'm assuming that only that information is extracted because (1) it's easy to get with a one-time-per-project cost and (2) it has a simple, definite meaning that can be used without any special knowledge about the program.)
What I'm proposing would be a tool that would attempt to classify incoming reports as connected to one of the known bugs (or as a new bug).
The simplest thing would be to assume that one failure site is one bug, but in the first example, two bugs get detected in the same place. The next easiest thing would be to require the entire stack to match, but again, this doesn't work in cases like the second example, where you have multiple pieces of valid code that can trip the same bug.
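For instance, the two baselines above could be prototyped roughly like this (just a sketch; the report format and frame names are invented):

from collections import defaultdict

# Each report is the list of call sites from one crash, outermost first.
reports = [
    ['main', 'parse_args', 'load_file', 'sanity_check'],  # bug 1
    ['main', 'run_job', 'load_file', 'sanity_check'],      # bug 2, same check
    ['main', 'run_job', 'walk', 'walk', 'walk'],            # bug 3, recursion
    ['main', 'run_job', 'walk', 'walk', 'walk', 'walk'],    # bug 3 again, deeper
]

def group_by_failure_site(reports):
    # Baseline 1: one failure site == one bug (wrongly merges bugs 1 and 2).
    groups = defaultdict(list)
    for r in reports:
        groups[r[-1]].append(r)
    return groups

def group_by_full_stack(reports):
    # Baseline 2: the entire stack must match (wrongly splits bug 3 by depth).
    groups = defaultdict(list)
    for r in reports:
        groups[tuple(r)].append(r)
    return groups

print(len(group_by_failure_site(reports)))  # 2 groups for 3 bugs
print(len(group_by_full_stack(reports)))    # 4 groups for 3 bugs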
The return pointer on the stack is just a pointer into memory. In theory, if you look at the call stack of a program that makes just one function call, the return pointer (for that one function) can have a different value on every execution of the program. How would you analyze that?
In theory you could read through a core dump using a map file. But doing so is extremely platform and compiler specific. You would not be able to create a general tool for doing this with any program. Read your compiler's documentation to see if it includes any tools for doing postmortem analysis.
If your program is decorated with assert statements, then each assert statement defines a valid state. The program statements between the assertions define the valid state changes.
A program that crashes has violated enough assertions that something is broken.
A program that's incorrect but "flaky" has violated at least one assertion but hasn't failed.
It's not at all clear what you're looking for. The valid states are -- sometimes -- hard to define but -- usually -- easy to represent as simple assert statements.
Since a crashed program has violated one or more assertions, a program with explicit, executable assertions doesn't need crash debugging. It will simply fail an assert statement and die visibly.
If you don't want to put in assert statements then it's essentially impossible to know what state should have been true and which (never-actually-stated) assertion was violated.
Unwinding the call stack to work out the position and the nesting is trivial. But it's not clear what that shows. It tells you what broke, but not what other things led to the breakage. Determining that would require guessing which assertions were supposed to have been true, which requires deep knowledge of the design.
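For example (names invented), a precondition written as an assert fails loudly and points at the exact invariant that was violated:

def withdraw(balance, amount):
    # Executable statements of the valid states:
    assert amount > 0, "amount must be positive, got %r" % amount
    assert balance >= amount, "overdraft: balance=%r, amount=%r" % (balance, amount)
    return balance - amount

print(withdraw(100, 40))   # fine: both assertions hold
withdraw(100, 150)         # dies visibly: AssertionError: overdraft: balance=100, amount=150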
Edit.
"seem related" and "seem unrelated" are undefinable without recourse to the actual design of the actual application and the actual assertions that should be true in each stack frame.
If you don't know the assertions that should be true, all you have is a random puddle of variables. What can you claim about "related" given a random pile of values?
Crash 1: a = 2, b = 3, c = 4
Crash 2: a = 3, b = 4, c = 5
Related? Unrelated? How can you classify these without knowing everything about the code? If you know everything about the code, you can formulate standard assert-statement conditions that should have been true. And then you know what the actual crash is.
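Concretely, with made-up invariants, the verdict flips depending on which assertion you presume should have held:

crash_1 = dict(a=2, b=3, c=4)
crash_2 = dict(a=3, b=4, c=5)

def consecutive(state):
    # Presumed invariant 1: a, b, c are consecutive integers.
    return state['b'] == state['a'] + 1 and state['c'] == state['b'] + 1

def a_is_even(state):
    # Presumed invariant 2: a is even.
    return state['a'] % 2 == 0

print(consecutive(crash_1), consecutive(crash_2))  # True True  -> look "related"
print(a_is_even(crash_1), a_is_even(crash_2))      # True False -> look "unrelated"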