Standardized Error Classification & Handling

I need to standardize on how I classify and handle errors/exceptions 'gracefully'.
I currently use a process by which I report errors to a function, passing an error-number, severity-code, location-info and an extra-info string. This function returns boolean true if the error is fatal and the app should die, false otherwise. As part of its process, apart from giving visual feedback to the user, the function also logs to file any errors above some severity level.
Error-number indexes an array of strings describing the type of error, e.g. 'File access', 'User input', 'Thread creation', 'Network access', etc. Severity-code is a bitwise OR of 0, 1, 2 or 4, where 0 = informative, 1 = user_retry, 2 = cannot_complete, 4 = cannot_continue. Location-info is module & function, and extra-info is parameter and local-variable values.
I want to make this into a standard way of error-handling that I can put in a library and re-use in all my apps. I mainly use C/C++ on Linux, but would want to use the resultant library with other languages/platforms as well.
An idea is to extend the error-type array to indicate some default behavior for a given severity level, but should this then become the action taken, giving no options to the user?
Or: should such an extension be a sub-array of options that the user needs to pick from? The problem with this is that the options would of necessity be generalized programming-related options that may very well completely baffle an end-user.
Or: should each app that uses the error-lib routine pass along its own array of either errors or default behaviors? But this would defeat the purpose of the library...
Or: should the severity levels be handled in each app?
Or: what do you suggest? How do you handle errors? How can I improve this?

How you handle errors really depends upon the application. A web application has a different error-catching mechanism than a desktop application, and both differ drastically from an asynchronous messaging system.
That being said, a common practice in error handling is to handle errors at the lowest possible level where they can be dealt with. This usually means the application layer or the GUI.
I like the severity levels. Perhaps you could build a pluggable error-collection library with different error-output providers and a severity-level provider.
Output providers could include things like a LoggingProvider and an IgnoreErrorsProvider.
Severity providers would probably be implemented by each project, since severity levels are usually determined by the type of project in which the error occurs (for example, network connection issues are more severe for a banking application than for a contact management system).
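A minimal sketch of that pluggable shape, in Java for brevity (the provider interface and LoggingProvider are made-up names; the severity flags just reuse the question's 0/1/2/4 scheme):

import java.util.ArrayList;
import java.util.List;

// Severity flags mirroring the question's bitwise scheme.
final class Severity {
    static final int INFORMATIVE     = 0;
    static final int USER_RETRY      = 1;
    static final int CANNOT_COMPLETE = 2;
    static final int CANNOT_CONTINUE = 4;
}

// Each output provider decides for itself what to do with an error.
interface ErrorOutputProvider {
    void report(int errorNumber, int severity, String location, String extraInfo);
}

// Example provider: log anything at or above a severity threshold.
class LoggingProvider implements ErrorOutputProvider {
    private final int threshold;
    LoggingProvider(int threshold) { this.threshold = threshold; }
    @Override
    public void report(int errorNumber, int severity, String location, String extraInfo) {
        if (severity >= threshold) {
            System.err.printf("[%d] sev=%d at %s: %s%n", errorNumber, severity, location, extraInfo);
        }
    }
}

// The library core: fans each error out to all registered providers and
// tells the caller whether the error is fatal.
class ErrorCollector {
    private final List<ErrorOutputProvider> providers = new ArrayList<>();
    void addProvider(ErrorOutputProvider p) { providers.add(p); }

    boolean handle(int errorNumber, int severity, String location, String extraInfo) {
        for (ErrorOutputProvider p : providers) {
            p.report(errorNumber, severity, location, extraInfo);
        }
        return (severity & Severity.CANNOT_CONTINUE) != 0; // fatal?
    }
}

Each application registers whatever providers fit its domain, which keeps the severity policy out of the shared library while the reporting contract stays uniform.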


How to implement Commands and Events for complex form using Event Sourcing?

I would like to implement CQRS and ES using the Axon framework.
I've got a pretty complex HTML form which represents a recruitment process with six steps.
ES would be helpful for generating historical statistics for selected dates and for tracking changes in the form.
An admin can always perform several operations:
assign person responsible for each step
provide notes for each step
accept or reject candidate on every step
turn on/off SMS or email notifications
assign tags
A form update (the difference only) is sent from the UI application to the backend.
Assuming I want to make changes only to the server-side application, the question is what should be a Command and what should be an Event. I'm considering three options:
The form patch is a Command which generates a Form Update Event.
Drawback: each event handler needs to check whether the change in the form concerns it, e.g. whether an email about a rejection should be sent.
The form patch is a Command which generates several Events, e.g. Interviewer Assigned, Notifications Turned Off, Rejected On Technical Interview.
Drawback: some events could be generated while others are not, because of constraint violations; e.g. Notifications Turned Off will succeed but Interviewer Assigned will fail due to assigning an unauthorized user. Maybe I should check all constraints before generating the events?
The form patch is converted to several Commands, e.g. Assign Interviewer, Turn Off Notifications, and each command generates an event, e.g. Interviewer Assigned, Notifications Turned Off.
Drawback: some commands can fail, e.g. Assign Interviewer can fail due to assigning an unauthorized user. This would end up in an inconsistent state, because some events would be stored in the repository and some would not. Maybe I should check all constraints before generating the commands?
The question I would call your attention to: are you creating an authority for the information you store, or are you just tracking information from the outside world?
Udi Dahan wrote Race Conditions Don't Exist, raising this interesting point:
A microsecond difference in timing shouldn’t make a difference to core business behaviors.
If you have an unauthorized user in your system, is it really critical to the business that they be authorized before they are assigned responsibility for a particular step? Can the system really tell that the "fault" is that the responsibility was assigned to the wrong user, rather than that the user is wrongly not authorized?
Greg Young talks about exception reports in warehouse systems, noting that the responsibility of the model in that case is not to prevent data changes, but to report when a data change has produced an inconsistent state.
What's the cost to the business if you update the data anyway?
If the semantics of the message is that a Decision Has Been Made, or that Something In The Real World Has Changed, then your model shouldn't be trying to block that information from being recorded.
FormUpdated isn't a particularly satisfactory event, for the reason you mention; you have to do a bunch of extra work to cast it in domain-specific terms. Given a choice, you'd prefer to do that once. It's reasonable to think in terms of translating events from domain-agnostic forms to domain-specific forms as you go along.
HttpRequestReceived ->
FormSubmitted ->
InterviewerAssigned
where the intermediate representations are short lived.
I can see one big drawback of the first option. One of the biggest advantages of CQRS/ES with Axon is scalability. We can add new features without worrying about regression bugs; adding a new feature means defining new commands, events, and handlers for them, and none of these should interfere with the ones already in our system.
FormUpdate as a command requires adding extra logic to one of the handlers. Adding a new attribute to the patch, and in consequence to the command, will cause changes to the existing logic. Scalability is no longer an advantage in that case.
VoiceOfUnreason gives a very good explanation of what you should think about when starting with such a system, so definitely take a look at his answer.
The only thing I'd like to add is that I'd suggest you take the third option.
With the examples you gave, the more generic commands/events don't tell you that much about what's happening in your domain. The more granular events explain far better what exactly has happened; the event's name alone already points it out.
Pulling Axon Framework into the loop, I can also add a couple of pointers.
From a command message perspective, it's safe to just pick a route and not overthink it too much. The framework quite easily allows you to adjust the command structure later on. In Axon Framework trainings it is typically suggested to let a command message take the form of the specific action you're performing. So assigning a person to a step would typically be an AssignPersonToStepCommand, as that is the exact action you'd like the system to perform.
For events it's typically a bit nastier to decide later on whether you want fine-grained or generic events. This follows from doing Event Sourcing: since the events are your source of truth, you'll be required to deal with every form of event you've got in your system.
Because of this, I'd argue that the weight of your decision should lie with how fine-grained your events become. To loop back to your question: in the example you give, I'd say option 3 fits best.
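To make option 3 concrete, here is a minimal sketch of one granular command and its handler, written against the Axon 4 API. The class and field names (RecruitmentProcess, AssignInterviewerCommand, the isAuthorized check) are illustrative assumptions, not taken from the question:

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.TargetAggregateIdentifier;

import static org.axonframework.modelling.command.AggregateLifecycle.apply;

// One granular command per admin action, as in option 3.
class AssignInterviewerCommand {
    @TargetAggregateIdentifier
    final String recruitmentId;
    final String stepId;
    final String interviewerId;

    AssignInterviewerCommand(String recruitmentId, String stepId, String interviewerId) {
        this.recruitmentId = recruitmentId;
        this.stepId = stepId;
        this.interviewerId = interviewerId;
    }
}

// The matching fine-grained event.
class InterviewerAssignedEvent {
    final String recruitmentId;
    final String stepId;
    final String interviewerId;

    InterviewerAssignedEvent(String recruitmentId, String stepId, String interviewerId) {
        this.recruitmentId = recruitmentId;
        this.stepId = stepId;
        this.interviewerId = interviewerId;
    }
}

class RecruitmentProcess {
    @AggregateIdentifier
    private String recruitmentId;

    protected RecruitmentProcess() {
        // no-arg constructor required by Axon for event-sourced reconstruction
    }

    @CommandHandler
    public void handle(AssignInterviewerCommand cmd) {
        // The constraint is enforced before any event is applied, so a
        // failing command leaves no partial events in the store.
        if (!isAuthorized(cmd.interviewerId)) {
            throw new IllegalStateException("interviewer not authorized");
        }
        apply(new InterviewerAssignedEvent(cmd.recruitmentId, cmd.stepId, cmd.interviewerId));
    }

    @EventSourcingHandler
    public void on(InterviewerAssignedEvent evt) {
        // update aggregate state from the event
    }

    private boolean isAuthorized(String interviewerId) {
        return true; // placeholder: the real rule lives in the domain
    }
}

Each command either succeeds atomically or fails with no events written, which addresses the inconsistency worry raised in the question.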

Are there any benefits using ESAPI's number validations?

I have been asked to add some input validation in all of our REST endpoints. We have two custom validation constraints in our system, wrapping around the ESAPI library; one for String, wrapping #isValidInput and one for Long, wrapping #isValidNumber.
ESAPI's #isValidNumber seems to simply check the number's min and max value (something that I can simply do with JSR-303 @Min / @Max). Are there any added benefits for using the ESAPI library or can I simply remove the custom constraint and add the bean validation annotations?
I do agree that we need the String canonicalization provided by ESAPI, but for the numbers I'm a bit sceptical.
Are there any added benefits for using the ESAPI library or can I simply remove the custom constraint and add the bean validation annotations?
In short, yes. If you notice the last parameter in the call to #isValidInput you linked is whether to turn canonicalization on or off. If your application turned it off, the only benefit ESAPI will give you is to outsource your validation regex into validation.properties, where if a prod issue arose, you could change the value and restart the server, saving an application build and deployment. JSR-303 will require a recompile and a deployment.
If however you've left canonicalization on, ESAPI provides one thing that I haven't found in another Java security library to date, which is the ability to detect mixed encoding and multiple encoding on a given input string. From a forensics and incident response perspective, this is ultra-useful as we can tell if our web application is being attacked in real-time, as well as record and audit information about the user performing it.
You seem to know all of this already, given your last statement. I haven't seen a web container that didn't pass in strings for numbers. My guess is that you're pulling the value from request.getParameter("foo"), which means it's a string, and you still want the defense provided by canonicalization. Even if a malformed value might get caught by a parse exception on the conversion to Integer or Long, there's also the risk of overflow or underflow, which could perturb your application in a different way.
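To put the two approaches side by side, here is a minimal sketch; it assumes the ESAPI 2.x Validator API and standard Bean Validation annotations, and the WeightRequest class and the 0-500 range are made up for the example:

import org.owasp.esapi.ESAPI;
import org.owasp.esapi.errors.ValidationException;

import javax.validation.constraints.Max;
import javax.validation.constraints.Min;

class WeightRequest {
    // JSR-303: a plain range check, applied after the framework has
    // already converted the string to a number.
    @Min(0)
    @Max(500)
    Long weightKg;
}

class EsapiNumberCheck {
    // ESAPI: the same range check, but performed on the raw string,
    // with a parse step included.
    static Double validWeight(String rawInput) throws ValidationException {
        return ESAPI.validator().getValidNumber(
                "weight",   // context label used in ESAPI's logs
                rawInput,   // the raw request parameter
                0,          // minimum
                500,        // maximum
                false);     // null not allowed
    }
}

The intent is that the raw string passes through ESAPI's canonicalization before the range check, which is where the mixed/multiple-encoding detection described above comes in.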

DDD: Using Value Objects inside controllers?

When you receive arguments in string format from the UI inside your controller, do you pass the strings to the application service (or to the command) directly?
Or, do you create value objects from the strings inside the controller ?
new Command(new SomeId("id"), Weight.create("80 kg"), new Date())
or
new Command("id", "80 kg", new Date())
new Command("id", "80", "kg", new Date())
Maybe it is not important, but it bothers me.
The question is: should we couple the controller to value objects from the domain?
Imagine you don't have the web between your application layer and the presentation layer (as with an Android activity or Swing); would you push the use of value objects into the UI?
Another thing: do you serialize/deserialize value objects to/from strings like this?
Weight weight = Weight.create("80 kg");
weight.getValue().equals(80.0);
weight.getUnit().equals(Unit.KILOGRAMS);
weight.toString().equals("80 kg");
In the case of passing strings into commands, I would rather pass "80 kg" instead of "80" and "kg".
Sorry if the question is not relevant or funny.
Thank you.
UPDATE
I came across this post while I was searching for information about a totally different topic: Value Objects in CQRS - where to use
They seem to prefer primitives or DTOs, and to keep VOs inside the domain.
I've also taken a look at Vaughn Vernon's book (Implementing Domain-Driven Design), and it talks about exactly that in chapter 14 (p. 522).
I've noticed he's using commands without any DTOs.
someCommand.setId("id");
someCommand.setWeightValue("80");
someCommand.setWeightUnit("kg");
someCommand.setOtherWeight("80 kg");
someCommand.setDate("17/03/2015 17:28:35");
someCommand.setUserName("...");
someCommand.setUserAttribute("...");
someCommand.setUserOtherAttributePartA("...");
someCommand.setUserOtherAttributePartB("...");
It is the command object that would be mapped by the controller. Value object initialization would appear in the command handler method, and the value objects would throw something in case of a bad value (self-validation on initialization).
I think I'm starting to be less bothered, but some other opinions would be welcomed.
As an introduction, this is highly opinionated and I'm sure everyone has different ideas on how it should work. But my endeavor here is to outline a strategy with some good reasons behind it so you can make your own evaluation.
Pass Strings or Parse?
My personal preference here is to parse everything in the Controller and send down the results to the Service. There are two main phases to this approach, each of which can spit back error conditions:
1. Attempt to Parse
When a bunch of strings come in from the UI, I think it makes sense to attempt to interpret them immediately. For easy targets like ints and bools, these conversions are trivial and model binders for many web frameworks handle them automatically.
For more complex objects like custom classes, it still makes sense to handle it in this location so that all parsing occurs in the same location. If you're in a framework which provides model binding, much of this parsing is probably done automatically; if not - or you're assembling a more complex object to be sent to a service - you can do it manually in the Controller.
Failure Condition
When parsing fails ("hello" is entered in an int field or 7 is entered for a bool) it's pretty easy to send feedback to the user before you even have to call the service.
2. Validate and Commit
Even though parsing has succeeded, there's still the necessity to validate that the entry is legitimate and then commit it. I prefer to handle validation in the service level immediately prior to committing. This leaves the Controller responsible for parsing and makes it very clear in the code that validation is occurring for every piece of data that gets committed.
In doing this, we can eliminate an ancillary responsibility from the Service layer. There's no need to make it parse objects - its single purpose is to commit information.
Failure Condition
When validation fails (someone enters an address on the moon, or enters a date of birth 300 years in the past), the failure should be reported back up to the caller (Controller, in this case). While the user probably makes no distinction between failure to parse and failure to validate, it's an important difference for the software.
Push Value Objects to UI?
I would accept parsed objects as far up the stack as possible, every time. If you can have someone else's framework handle that bit of transformation, why not do it? Additionally, the closer to the UI that the objects can live, the easier it is to give good, quick feedback to the user about what they're doing.
A Note on Coupling
Overall, pushing objects up the stack does result in greater coupling. However, writing software for a particular domain does involve being tightly coupled to that domain, whatever it is. If a few more components are tightly coupled to some concepts that are ubiquitous throughout the domain - or at least to the API touchpoints of the service being called - I don't see any real reduction in architectural integrity or flexibility occurring.
Parse One Big String or Components?
In general, it tends to be easiest to just pass the entire string into the Parse() method to get sorted through. Take your example of "80 kg":
"80 kg" and "120 lbs" may both be valid weight inputs
If you're passing in strings to a Parse() method, it's probably doing some fairly heavy lifting anyway. Expecting it to split a string based on a space is not overbearing.
It's far easier to call Weight.create(inputString) than it is to split inputString by " ", then call Weight.create(split[0], split[1]).
It's easier to maintain a single-string-input Parse() function as well. If some new requirement comes in that the Weight class has to support pounds and ounces, a new valid input may be "120 lbs 6 oz". If you're splitting up the input, you now need four arguments. Whereas if it's entirely encapsulated within the Parse() logic, there's no burden to outside consumers. This makes the code more extensible and flexible.
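A sketch of such a single-string factory, using the create/getValue/getUnit API from the question (the unit set and error handling are illustrative):

import java.util.Locale;

final class Weight {
    enum Unit { KILOGRAMS, POUNDS }

    private final double value;
    private final Unit unit;

    private Weight(double value, Unit unit) {
        this.value = value;
        this.unit = unit;
    }

    // Single-string entry point: consumers never deal with the format, so
    // supporting "120 lbs 6 oz" later would not change any call sites.
    static Weight create(String input) {
        String[] parts = input.trim().split("\\s+");
        if (parts.length != 2) {
            throw new IllegalArgumentException("expected '<number> <unit>': " + input);
        }
        double value = Double.parseDouble(parts[0]); // throws on non-numeric input
        String unitToken = parts[1].toLowerCase(Locale.ROOT);
        if (unitToken.equals("kg")) {
            return new Weight(value, Unit.KILOGRAMS);
        } else if (unitToken.equals("lbs")) {
            return new Weight(value, Unit.POUNDS);
        }
        throw new IllegalArgumentException("unknown unit: " + parts[1]);
    }

    double getValue() { return value; }
    Unit getUnit() { return unit; }
}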
The difference between a DTO and a VO is that a DTO has no behavior; it's a simple container designed to pass data around from component to component. Besides, you rarely need to compare two DTOs, and they are generally transient.
A Value Object can have behavior. Two VOs are compared by value rather than by reference, which means, for instance, that two Address value objects holding the same data but being different object instances are equal. This is useful because VOs are generally persisted in one form or another, and there are more occasions to compare them.
It turns out that in a DDD application, VOs will more often than not be declared and used in your Domain layer, since they belong to the domain's Ubiquitous Language and because of separation of concerns. They can sometimes be manipulated in the Application layer but typically won't be sent between the UI layer and the Application layer; we use DTOs for that instead.
Of course, this is debatable and depends a lot on the layers you choose to build your application out of. There might be cases when crunching your layered architecture down to 2 layers will be beneficial, and when using business objects directly in the UI won't be that bad.
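The value-equality semantic is easy to see in code. A minimal illustration using a Java record (Java 16+), whose generated equals compares by value; the Address fields here are made up:

// A record's equals/hashCode compare by value, which is exactly the
// Value Object semantic described above.
record Address(String street, String city, String zip) {
    // Behavior can live on the VO too.
    boolean isSameCityAs(Address other) {
        return city.equals(other.city);
    }
}

class Demo {
    public static void main(String[] args) {
        Address a = new Address("1 Main St", "Springfield", "12345");
        Address b = new Address("1 Main St", "Springfield", "12345");
        System.out.println(a.equals(b)); // true: same data, different instances
    }
}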

Can TDD be a valid alternative to overkill data validation?

Consider these two data validation scenarios:
Check everything everywhere
Make sure that every method that takes one or more arguments actually checks them to ensure that they're syntactically valid.
Pros
Very fine check granularity.
If the code that is being written is for some kind of library we make sure to limit the damage that can be done if the developers that will be using it fail to provide valid data.
Cons
It's costly to always perform checks that most of the time shouldn't be needed.
It's still possible to forget to add a check every now and then.
More code is being written and hence in need of maintenance.
Make use of TDD goodness
Validate data only when it enters your code from the external world.
To make sure that internally data will always be syntactically correct, create tests that check every method that returns a value, to make sure that if valid data enters, valid data exits.
The pros and the cons are practically switched with the ones from the former approach.
As of now I'm using the first approach, but since I'm employing test driven development I thought that maybe I could go with the second one.
The advantages are clear, still, I wonder if it's as secure as the first method.
It sounds like the first method is contract driven, and one aspect of that is that you also need to verify that what you return from any public interface meets the contract.
But, I think that both approaches are valid, but very different.
TDD only partially deals with the public interface, in that it should check that every input is properly validated. Unfortunately, unless you have all your validation in separate functions, it becomes very difficult to adequately test that a function of 3 or 4 parameters validates each of them properly. The number of tests you have to write is quite high in either approach.
If you are writing a library, then every function that can be called directly from the outside (outside being outside the library) needs to check that every input is valid, and that invalid input is handled as per the contract, either returning null or throwing an exception. It must also be in agreement with the documentation.
Once you have verified the input, there is no reason to force the verification onto private functions, as those can only be called from within the library, and by then you should have ensured that you are only dealing with valid data.
Lots of tests will be needed regardless, unfortunately. All these tests do is ensure that you don't have any surprise problems, but that should generally help justify the cost of writing and maintaining them.
As to your question: if your tests are really well written, and you ensure that all validity checks are done completely, then the second method should be as secure. The risk is that if you believe it is secure but your tests are poorly written, it will actually be worse than having no tests, because of the assumption that your tests are well-written.
I would use both methods until you know your tests are well written; then just go with TDD.
My opinion is that in the first scenario, two of your Cons outweigh everything else:
It's costly to always perform checks that most of the time shouldn't be needed.
More code is being written and hence in need of maintenance.
Also, technically TDD has no bearing on this question, because it is not a testing technique. More later...
To mitigate the Cons I would strongly advocate (as I think you say) splitting the code into an outside and an inside: the outside is where all the validation occurs. Hopefully this is but a thin wrapper around the inside, to prevent GIGO (garbage in, garbage out). Once inside, data never needs to be validated again.
As for TDD, I would strongly advocate (as you are now doing) employing it to develop your code, with the added benefit of leaving a trail of tests that become a regression test suite. Now you will naturally develop your outside code to perform robust validation, with the promise of easily adding any checks that you might initially forget. Your inside code can be developed assuming it will only handle valid data, but TDD will still give you the confidence that it will function to spec.
I'm saying that I would go with the second approach, as I've described, independently of whether I'm developing with TDD, or not (but TDD is always my first choice).
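A minimal sketch of that outside/inside split (the account example and all names are made up):

// Outside: a thin validating wrapper. All external input passes through here.
class AccountFacade {
    private final AccountCore core = new AccountCore();

    void deposit(String rawAmountCents) {
        long amountCents;
        try {
            amountCents = Long.parseLong(rawAmountCents);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("not a number: " + rawAmountCents);
        }
        if (amountCents <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        core.deposit(amountCents); // from here on, validity is an invariant
    }
}

// Inside: developed with TDD against valid inputs only; it never re-validates.
class AccountCore {
    private long balanceCents;

    void deposit(long amountCents) {
        balanceCents += amountCents;
    }
}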
The advantages are clear, still, I wonder if it's as secure as the first method.
This completely depends on how well you test it.
This could be just as secure, if the following two criteria are met:
Every publicly exposed means of adding data to the system is validated completely
Every internal method that translates data is completely and adequately tested
However, I question that this would be easier or that it would require less code. The amount of code required to check every public entry point is going to be very similar to the amount of code required to validate each method. You're going to need more checks in the entry points, since they'll have to check things that might otherwise be checked internally.
For the second method, you need two good sets of tests. You must not only check that
To make sure that if valid data enters, valid data exits.
You must also check that if invalid data enters, an exception is thrown. I suppose you still have to validate data and kick it out if it's invalid; this is really the only way if you don't want pesky ArgumentNullExceptions or other cryptic errors in your production application. However, TDD can really toughen up the quality of all that checking (especially with fuzz testing).
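In JUnit 5 terms, that second set of tests looks roughly like this (a sketch; AgeParser and its range rule are hypothetical, and in Java the cryptic error would be a NumberFormatException or NullPointerException rather than .NET's ArgumentNullException):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class AgeParserTest {
    // Valid data in, valid data out.
    @Test
    void validInputProducesValidOutput() {
        assertEquals(42, AgeParser.parse("42"));
    }

    // Invalid data in, a well-defined exception out -- not a cryptic
    // failure deep in production code.
    @Test
    void invalidInputIsRejectedAtTheBoundary() {
        assertThrows(IllegalArgumentException.class, () -> AgeParser.parse("-5"));
        assertThrows(IllegalArgumentException.class, () -> AgeParser.parse("hello"));
    }
}

class AgeParser {
    static int parse(String raw) {
        final int age;
        try {
            age = Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("not a number: " + raw);
        }
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("age out of range: " + age);
        }
        return age;
    }
}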
One item is missing from your list of Pros and Cons, and it is important enough to make unit testing a much safer method than maniacal parameter checking.
You just have to consider the When and the Where.
For unit testing the when and the where are:
when: at design time
where: in a dedicated source file outside of the application code
For overkill data checking they are:
when: at runtime
where: entangled in the application source code, typically using asserts.
That is the point: code covered by unit tests detects errors at design time, when you run the tests. If you are the paranoid and schizophrenic kind of tester (the best kind), you write tests designed to break whatever can be broken, checking each data boundary and perverse input. You also use code coverage tools to ensure every branch of every alternative is tested. You have no limit: tests live in their own files and do not clutter the application. It doesn't matter if you end up with ten times as many lines of test code as application code; there is no runtime penalty and no readability penalty.
On the other hand, integrated overkill checking detects errors at runtime. In the worst case it will detect errors on the user's system, where you can do nothing about them (if you ever even hear that the error happened). And even if you are the paranoid kind, you will have to limit your checking: assertions can't be 90 percent of the application code. They raise readability and maintenance issues, and often a heavy performance penalty. Where will you stop, then? Only checking parameters from external input? Checking every possible or impossible input of inner functions? Checking every loop invariant? Also testing behavior when out-of-flow data (globals, system files, etc.) changes? You must also be conscious that assertion code can itself contain bugs. What if the formula of an assertion performs a division? You must ensure it cannot lead to a divide-by-zero error.
Another problem is that in many cases you just don't know what can be done about an assertion failure. If you are at a real entry point, you can return something understandable to your user or to the library user; when you are checking inner functions, there is usually no sensible way to recover.

What is meant by the term "hook" in programming?

I recently heard the term "hook" while talking to some people about a program I was writing. I'm unsure exactly what this term implies although I inferred from the conversation that a hook is a type of function. I searched for a definition but was unable to find a good answer. Would someone be able to give me an idea of what this term generally means and perhaps a small example to illustrate the definition?
Essentially it's a place in code that allows you to tap in to a module to either provide different behavior or to react when something happens.
A hook is functionality provided by software for users of that software to have their own code called under certain circumstances. That code can augment or replace the current code.
In the olden days when computers were truly personal and viruses were less prevalent (I'm talking the '80s), it was as simple as patching the operating system software itself to call your code. I remember writing an extension to the Applesoft BASIC language on the Apple II which simply hooked my code into the BASIC interpreter by injecting a call to my code before any of the line was processed.
Some computers had pre-designed hooks, one example being the I/O stream on the Apple II. It used such a hook to inject the whole disk sub-system (Apple II ROMs were originally built in the days where cassettes were the primary storage medium for PCs). You controlled the disks by printing the ASCII code 4 (CTRL-D) followed by the command you wanted to execute then a CR, and it was intercepted by the disk sub-system, which had hooked itself into the Apple ROM print routines.
So for example, the lines:
PRINT CHR$(4);"CATALOG"
PRINT CHR$(4);"IN#6"
would list the disk contents then re-initialize the machine. This allowed such tricks as protecting your BASIC programs by setting the first line as:
123 REM XIN#6
then using POKE to insert the CTRL-D character in where the X was. Then, anyone trying to list your source would send the re-initialize sequence through the output routines where the disk sub-system would detect it.
That's often the sort of trickery we had to resort to, to get the behavior we wanted.
Nowadays, with the operating system more secure, it provides facilities for hooks itself, since you're no longer supposed to modify the operating system "in-flight" or on the disk.
They've been around for a long time. Mainframes had them (called exits) and a great deal of mainframe software uses those facilities even now. For example, the free source code control system that comes with z/OS (called SCLM) allows you to entirely replace the security subsystem by simply placing your own code in the exit.
In a generic sense, a "hook" is something that will let you, a programmer, view and/or interact with and/or change something that's already going on in a system/program.
For example, the Drupal CMS provides developers with hooks that let them take additional action after a "content node" is created. If a developer doesn't implement a hook, the node is created per normal. If a developer implements a hook, they can have some additional code run whenever a node is created. This code could do anything, including rolling back and/or altering the original action. It could also do something unrelated to the node creation entirely.
A callback could be thought of as a specific kind of hook. By implementing callback functionality into a system, that system is letting you call some additional code after an action has completed. However, hooking (as a generic term) is not limited to callbacks.
Another example. Sometimes Web Developers will refer to class names and/or IDs on elements as hooks. That's because by placing the ID/class name on an element, they can then use Javascript to modify that element, or "hook in" to the page document. (this is stretching the meaning, but it is commonly used and worth mentioning)
Simply said:
A hook is a means of executing custom code (function) either before, after, or instead of existing code. For example, a function may be written to "hook" into the login process in order to execute a Captcha function before continuing on to the normal login process.
Hooks are a category of function that allows base code to call extension code. This can be useful in situations in which a core developer wants to offer extensibility without exposing their code.
One usage of hooks is in video game mod development. A game may not allow mod developers to extend base functionality, but hooks can be added by core mod library developers. With these hooks, independent developers can have their custom code called upon any desired event, such as game loading, inventory updates, entity interactions, etc.
A common method of implementation is to give a function an empty list of callbacks, then expose the ability to extend the list of callbacks. The base code will always call the function at the same and proper time but, with an empty callback list, the function does nothing. This is by design.
A third party, then, has the opportunity to write additional code and add their new callback to the hook's callback list. With nothing more than a reference of available hooks, they have extended functionality at minimal risk to the base system.
Hooks don't allow developers to do anything that can't be done with other structures and interfaces. They are a choice to be made with consideration to the task and users (third-party developers).
For clarification: a hook allows the extension and may be implemented using callbacks. Callbacks are generally nothing more than a function pointer; the computed address of a function. There appears to be confusion in other answers/comments.
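A sketch of that callback-list pattern in Java (the inventory/mod names are illustrative):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Base code owns the hook: a list of callbacks, empty by default.
class InventoryHooks {
    private static final List<Consumer<String>> onUpdate = new ArrayList<>();

    // Exposed so third parties can extend the list.
    static void register(Consumer<String> callback) {
        onUpdate.add(callback);
    }

    // The base code always fires the hook at the proper time; with an
    // empty list this does nothing, by design.
    static void fire(String itemName) {
        for (Consumer<String> cb : onUpdate) {
            cb.accept(itemName);
        }
    }
}

class ModExample {
    public static void main(String[] args) {
        // A third-party mod registers its own code...
        InventoryHooks.register(item -> System.out.println("Mod saw update: " + item));
        // ...which runs whenever the base code fires the hook.
        InventoryHooks.fire("sword");
    }
}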
Hooking in programming is a technique employing so-called hooks to make a chain of procedures act as an event handler.
A hook denotes a place in the code where you dispatch an event of a certain type; if this event was registered before, with a proper function to call back, then it is handled by that registered function, otherwise nothing happens.
Hooks can be executed when some condition is encountered, e.g. some variable changes, some action is called, or some event happens. Hooks can enter into the process and change things, or react to changes.
Oftentimes hooking refers to Win32 message hooking or the Linux/OSX equivalents, but more generically hooking is simply notifying another object/window/program/etc that you want to be notified when a specified action happens. For instance: Having all windows on the system notify you as they are about to close.
As a general rule, hooking is somewhat hazardous, since doing it without understanding how it affects the system can lead to instability or at the very least unexpected behaviour. It can also be VERY useful in certain circumstances, though. For instance: FRAPS uses it to determine which windows it should show its FPS counter on.
A chain of hooks is a set of functions in which each function calls the next. What is significant about a chain of hooks is that a programmer can add another function to the chain at run time. One way to do this is to look for a known location where the address of the first function in a chain is kept. You then save the value of that function pointer and overwrite the value at the initial address with the address of the function you wish to insert into the hook chain. The function then gets called, does its business and calls the next function in the chain (unless you decide otherwise). Naturally, there are a number of other ways to create a chain of hooks, from writing directly to memory to using the metaprogramming facilities of languages like Ruby or Python.
An example of a chain of hooks is the way that an MS Windows application processes messages. Each function in the processing chain either processes a message or sends it to the next function in the chain.
In the Drupal content management system, 'hook' has a relatively specific meaning. When an internal event occurs (like content creation or user login, for example), modules can respond to the event by implementing a special "hook" function. This is done via naming convention -- [your-plugin-name]_user_login() for the User Login event, for example.
Because of this convention, the underlying events are referred to as "hooks" and appear with names like "hook_user_login" and "hook_user_authenticate()" in Drupal's API documentation.
Many answers, but no examples, so here's a dummy one: the following complicated_func offers two hooks to modify its behavior:
from typing import List, Callable

def complicated_func(
    lst: List[int], hook_modify_element: Callable[[int], int], hook_if_negative=None
) -> int:
    res = sum(hook_modify_element(x) for x in lst)
    if res < 0 and hook_if_negative is not None:
        print("Returning negative hook")
        return hook_if_negative
    return res

def my_hook_func(x: int) -> int:
    return x * 2

if __name__ == "__main__":
    res = complicated_func(
        lst=[1, 2, -10, 4],
        hook_modify_element=my_hook_func,
        hook_if_negative=0,
    )
    print(res)
A function that allows you to supply another function rather than merely a value as an argument, in essence extending it.
In VERY short: you can change the code of an API call such as MessageBox so that it performs a different function edited by you (globally it will work system-wide, locally it will work process-wide).
