We recently started using Behave (https://github.com/behave/behave) for BDD testing of a new Python web service.
Question
Is there any way we can get detailed info about the failure cause when a test fails? The tests throw AssertionError, but they never show what exactly went wrong, for example the expected value and the actual value that went into the assert.
We have been trying to find an existing feature like this, but I guess it does not exist. Naturally, a good answer to this question would be hints and tips on how to achieve this behavior by modifying the source code, and whether this feature exists in other, similar BDD frameworks, like jBehave, NBehave or Cucumber.
Example
Today, when a test fails, the output says:
Scenario: Logout when not logged in # features\logout.feature:6
Given I am not logged in # features\steps\logout.py:5
When I log out # features\steps\logout.py:12
Then the response status should be 401 # features\steps\login.py:18
Traceback (most recent call last):
File "C:\pro\venv\lib\site-packages\behave\model.py", line 1037, in run
match.run(runner.context)
File "C:\pro\venv\lib\site-packages\behave\model.py", line 1430, in run
self.func(context, *args, **kwargs)
File "features\steps\login.py", line 20, in step_impl
assert context.response.status == int(status)
AssertionError
Captured stdout:
api.new_session
api.delete_session
Captured logging:
INFO:urllib3.connectionpool:Starting new HTTP connection (1): localhost
...
I would like something more like:
Scenario: Logout when not logged in # features\logout.feature:6
Given I am not logged in # features\steps\logout.py:5
When I log out # features\steps\logout.py:12
Then the response status should be 401 # features\steps\login.py:18
ASSERTION ERROR
Expected: 401
But got: 200
As you can see, the traceback clearly shows the assertion from our generic step,
`assert context.response.status == int(status)`
but I would rather have a function like
assert(behave.equals, context.response.status, int(status))
or anything else that makes it possible to generate dynamic messages from the failed assertion.
Instead of using "raw assert" statements as in your example above, you can use another assertion provider, such as PyHamcrest, which will provide you with the desired details.
It will show you exactly what went wrong. Use it in your step implementation like this:
# -- file:features/steps/my_steps.py
from hamcrest import assert_that, equal_to
...
assert_that(context.response.status, equal_to(int(status)))
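When the assertion fails, PyHamcrest reports both the expected and the actual value. With illustrative values for your example, the failure message looks along these lines:
AssertionError:
Expected: <401>
     but: was <200>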
See also:
http://jenisys.github.io/behave.example/intro.html#select-an-assertation-matcher-library
https://github.com/jenisys/behave.example
According to https://pythonhosted.org/behave/tutorial.html?highlight=debug, this implementation is working for me.
A “debug on error/failure” functionality can easily be provided by using the after_step() hook. The debugger is started when a step fails.
It is in general a good idea to enable this functionality only when needed (in interactive mode). This is accomplished in this example by using an environment variable.
# -- FILE: features/environment.py
# USE: BEHAVE_DEBUG_ON_ERROR=yes (to enable debug-on-error)
from distutils.util import strtobool as _bool
import os

BEHAVE_DEBUG_ON_ERROR = _bool(os.environ.get("BEHAVE_DEBUG_ON_ERROR", "no"))

def after_step(context, step):
    if BEHAVE_DEBUG_ON_ERROR and step.status == "failed":
        # -- ENTER DEBUGGER: Zoom in on failure location.
        # NOTE: Use IPython debugger, same for pdb (basic python debugger).
        import ipdb
        ipdb.post_mortem(step.exc_traceback)
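With this hook in place, you enable the debugger only when you actually want it, e.g. by running behave as BEHAVE_DEBUG_ON_ERROR=yes behave; the post-mortem debugger then opens at the first failing step.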
Don't forget you can always add an info message to an assert statement. For example:
assert output == expected, f'{output} is not {expected}'
I find that using PyHamcrest assertions yields much better error reporting than standard Python assertions.
Related
I have the following line of code which reads an image (which is fed into a POST request):
files = {"image": (image_path, open(image_path, "rb"))}
While trying to run this through mypy, it keeps throwing the following error:
Argument 1 to "open" has incompatible type "Optional[str]"; expected "Union[Union[str, bytes, PathLike[str], PathLike[bytes]], int]"
I've tried searching for this, but I've not found a solution to similar problems.
Is there a different way to read filepaths in order to avoid these issues?
Not the correct answer, but if you want to make the error go away temporarily so you can move ahead, adding
# type: ignore
at the end of the offending line should work.
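Alternatively, you can satisfy mypy without suppressing the check by narrowing the Optional[str] down to str before calling open(). A minimal sketch, assuming the path comes from something typed as returning Optional[str] (get_image_path here is a hypothetical stand-in):

from typing import Optional

def get_image_path() -> Optional[str]:
    # Hypothetical stand-in for wherever the Optional[str] comes from.
    return "photo.jpg"

image_path = get_image_path()
if image_path is None:
    raise ValueError("image_path is not set")

# mypy now knows image_path is str, so the open() call type-checks.
files = {"image": (image_path, open(image_path, "rb"))}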
I use Visual Studio Code for Python development.
In general, I want my IDE to break whenever an exception is raised, so I have the Raised Exceptions option checked under the Debug window.
However, there is a specific (encoding-related) exception I would like to ignore because it is raised thousands of times per second. I wrap it in a try-except block but, as expected, the debugger still breaks when the exception is thrown. I want to suppress this behavior, but only for a specific exception type.
Is there a way to do this in Visual Studio Code?
There are a couple of settings available for configuring Python exception behavior when debugging in VS Code. For the following examples, we assume that the checkboxes Raised Exceptions and Uncaught Exceptions are both checked (enabled). This is normally a situation where the debugger stops on lots of unwanted exceptions.
Intro
Consider the following example. For this code, the debugger will stop at the three places indicated.
try:
    1 / 0  # FIRST STOP
except:
    pass

func = lambda: 1 / 0  # SECOND STOP
try:
    func()  # THIRD STOP
except:
    pass
Note that the exception that causes the "second stop" notated above doesn't occur at the point in the code where the variable func is assigned. Indeed, being outside of a try-except block, such code would otherwise cause the program to exit. Instead, of course, the exception happens later on, nested within the delayed invocation, which thankfully is protected by a try. This distinction will become important for an example further below.
1. Line annotation
The first technique allows you to prevent the debugger from breaking on exceptions at specific locations in your code. Put the special #IgnoreException token in a comment on the desired line or lines. See the pydevd sources for the RegEx forms of this tag which the debugger will recognize.
try:
    1 / 0  ##IgnoreException
except:
    pass

func = lambda: 1 / 0  ##IgnoreException
try:
    func()  ##IgnoreException
except:
    pass
This is great for specialized, fine-grained control of where the debugger stops, but as a more general solution this approach will obviously get out of hand quickly. Before moving on from this, though, note that there is a way to globally enable or disable the #IgnoreException behavior in the debugger.
This feature is enabled by default when the debugger starts; if that's all you need, you can skip this section. To globally disable #IgnoreException handling, you can either insert the following snippet where it executes once at the start of your program or, if desired, instrument your code to programmatically enable and disable #IgnoreException handling according to runtime conditions as needed. The try-except block prevents the code from crashing when it's not being debugged or when the debugger isn't installed.
# To enable or disable #IgnoreException handling when pydevd is active:
#   True  - debugger won't stop on '#IgnoreException' lines (default)
#   False - the annotations are ignored, allowing the debugger to stop
try:
    import pydevd
    d = pydevd.GetGlobalDebugger()
    d.ignore_exceptions_thrown_in_lines_with_ignore_exception = False
except:
    pass
2. Context-aware ignore
Moving on to the second option, I'll reset back to the original code, without the line annotations. This time, by changing the value of a secret debugger switch, the debugger will only stop on exceptions which are raised outside of the caller's immediate context. This is the skip_on_exceptions_thrown_in_same_context debugger flag, and it is not enabled by default, so if you want this behavior you have to explicitly turn it on (as shown):
try:
    from pydevd import GetGlobalDebugger
    GetGlobalDebugger().skip_on_exceptions_thrown_in_same_context = True
except:
    pass

try:
    1 / 0
except:
    pass

func = lambda: 1 / 0
try:
    func()  # ONLY STOP
except:
    pass
Now the debugger only stops once, versus stopping on three raised exceptions in the first example. And I know what you're thinking: it now makes sense to combine these two approaches, since there will typically be far fewer points in most code that need annotating with #IgnoreException.
3. Combine both techniques
So here's the final version of the example. Even with both the Raised Exceptions and Uncaught Exceptions options enabled, the VS Code debugger runs all the way through without stopping:
try:
    from pydevd import GetGlobalDebugger
    GetGlobalDebugger().skip_on_exceptions_thrown_in_same_context = True
except:
    pass

try:
    1 / 0
except:
    pass

func = lambda: 1 / 0
try:
    func()  ##IgnoreException
except:
    pass

# NO DEBUGGER STOPS...
I am trying to run the snmpget code sample in VB.NET available at:
https://github.com/lextm/sharpsnmplib/blob/master/Samples/VB.NET/snmpget/
When I try to run the code, I get the following exception:
http://i.stack.imgur.com/S5s9Z.png
The text on the exception indicates that the length of the string used to instantiate ObjectIdentifier is less than 2. However, this is not the case as seen in the watch window.
Could you let me know:
1. Any suggestions to fix this error? Am I not passing the command line args correctly?
2. Could you provide a sample command line argument string for SNMP v3?
Thank you for all the support!
The error message is clear enough: you cannot pass "0", or any other string that contains only a single number. A valid OID requires at least two parts, such as "0.0".
https://sharpsnmplib.codeplex.com/wikipage?title=600001&referringTitle=KB
Command line tool usage can be found in KB600001 (linked above), and you can find other documentation on CodePlex too:
https://sharpsnmplib.codeplex.com/documentation
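As for a sample SNMPv3 command line, I don't have the exact syntax at hand, so treat the following as an assumption and check KB600001 for the flags your version actually supports. If the sample tool follows net-snmp conventions, an authPriv GET would look roughly like:
snmpget -v 3 -l authPriv -u myuser -a MD5 -A authpassword -x DES -X privpassword localhost .1.3.6.1.2.1.1.1.0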
I am writing a PL/Perl function for my trigger. When an INSERT/UPDATE happens, my PL/Perl script runs, and in it I dynamically build a query based on the event I receive. I want to print that query to the terminal when I do an insert/update, but nothing appears. How can I print it?
Use the elog function to raise notices. You can also use it to raise full errors.
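For example, a call like elog(NOTICE, "generated query: $query") inside the function body sends the message to the client session (e.g. psql) and, depending on your logging settings, to the server log; a trigger function runs in the server process, so it has no terminal of its own to print to.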
I am using Cucumber with RubyMine, and I have a scenario with steps that verify some special controls from a form (I am using Cucumber for automation testing). The controls don't have anything to do with each other, and there is no reason for the steps to be skipped if an earlier one fails.
Does anyone know what configurations or commands should I use to run all the steps in a scenario even if they all fail?
I think the only way to achieve the desired behavior (which is quite uncommon) is to define custom steps and catch exceptions in them yourself. According to the Cucumber wiki, a step fails if it raises an error, and almost all default steps raise an error if they can't find or interact with an element on the page. If you catch these exceptions, the step will be marked as passed, but in the rescue block you can provide custom output. I also recommend that you carefully define the exceptions you want to catch: if it's acceptable that Selenium can't find an element on the page, rescue only ElementNotFound exceptions; don't catch all exceptions.
I've seen a lot of threads on the Web about people wanting to continue steps execution if one failed.
I've discussed this with the Cucumber developers: they think it is a bad idea: https://groups.google.com/forum/#!topic/cukes/xTqSyR1qvSc
Many times, scenarios can be reworked to avoid this need: scenarios must be split into several smaller and independent scenarios, or several checks can be aggregated into one, providing a more human scenario and a less script-like scenario.
But if you REALLY need this feature, as our project does, we've made a fork of Cucumber-JVM.
This fork lets you annotate steps so that when they fail with a given exception, the next steps execute anyway (and the step itself is marked as failed).
The fork is available here:
https://github.com/slaout/cucumber-jvm/tree/continue-next-steps-for-exceptions-1.2.4
It's published on the OSSRH Maven repository.
See the README.md for usage, explanation screenshot and Maven dependency.
It's only available for the Java language, though: any help is welcome to adapt the code to Ruby, for instance. I don't think it would be a lot of work.
The question is old, but hopefully this will be helpful. What I'm doing feels kind of "wrong", but it works. In your web steps, if you want to keep going, you have to catch exceptions. I'm doing that primarily to add helpful failure messages. I'm checking a table full of values that are identified in Cucumber with a table having a bunch of rows like:
Then my result should be:
| Row Identifier | Column Identifier | Subcolumn Identifier | $1,247.50 |
where the identifiers make sense in the application domain, and name a specific cell in the results table in a human-friendly way. I have helpers that convert the human identifiers to DOM IDs, which are used to first check whether the row I'm looking for exists at all, then look for the specific value in a cell in that row. The default failure message for a missing row is clear enough for me (expected to find css "tr#my_specific_dom_id" but there were no matches). But the failure message for checking specific text in a cell is completely unhelpful. So I made a step that catches the exception and uses the Cucumber step info and some element searching to get a good failure message:
Then /^my application domain results should be:$/ do |table|
  table.rows.each do |row|
    row_id = dom_id_for(row[0])
    cell_id = dom_id_for(row[0], row[1], row[2])
    page.should have_css "tr##{row_id}"
    begin
      page.should have_xpath("//td[@id='#{cell_id}'][text()=\"#{row[3].strip}\"]")
    rescue Capybara::ExpectationNotMet => exception
      # find returns a Capybara::Element, native returns a Selenium::WebDriver::Element
      contents = find(:xpath, "//td[@id='#{cell_id}']").native.text
      puts "Expected #{ row[3] } for #{ row[0,2].join(' ') } but found #{ contents } instead."
      @step_failures_were_rescued = true
    end
  end
end
Then I define a hook in features/support/hooks.rb like:
After do |scenario|
  unless scenario.failed?
    raise Capybara::ExpectationNotMet if @step_failures_were_rescued
  end
end
This makes the overall scenario fail, but it masks the step failure from Cucumber, so all the step results are green, including the ones that aren't right. You have to see the scenario failure, then look back at the messages to see what failed. This seems kind of "bad" to me, but it works.
It's WAY more convenient in my case to get the expected and found values listed in a domain-friendly context for the whole table I'm checking, rather than to get a message like "I looked for "$123.45" but I couldn't find it." There might be a better way to do this using the Capybara "within" method. This is the best I've come up with so far, though.