So I am writing a test framework with jest-playwright.
One of the things I have not been able to work out is how to get a full stack trace of my errors.
This is my framework:
I have a file called testapp.ts which contains all the methods that drive the tests, e.g. navigation, login, clicking buttons, etc.
Then I have test suites, e.g. test1.test.ts. Each suite contains tests, and the tests call whatever methods they need in testapp.ts to drive the test.
Now, within my methods there are verification points. My problem is that when a verification fails and an error is thrown, it only shows me the line number in the method that failed.
What I want is for the full stack trace to be shown, e.g.:
test1.test.ts > line number of the test which calls the method in testapp.ts, and then
testapp.ts > line number of the method where the verification fails
This is simplifying it, and you would think it should be easy to trace without this, but in my framework I will have lots of tests calling methods, those methods will call other methods, and somewhere in that path the error will be thrown, so I need to be able to follow the path.
My methods and tests are all async and my method calls are awaited; I believe that is the reason it's happening.
However, the same tests stack trace properly when using playwright/test, so why can't Jest? The same goes for Java with Selenium: you get the full stack trace.
Thanks
Related
Basically, quite a few of my tests use some autogenerated code. And the autogenerated code often throws an Error with a meaningless message - but it has some other fields on it that are quite meaningful.
By default, when a test in Jest throws an Error, Jest seems to print the error message. I'd like to add a different handler for a particular subclass of Error that prints the more meaningful text. This will help me determine more quickly why my tests are failing.
Any ideas would be great!
I have some tests that depend on a certain thing being true (access to the internet, as it happens, but that isn't important and I don't want to discuss the details of the condition).
I can very easily write a static helper method which tests the (parameterless) condition and calls Assert.Inconclusive("Explanatory Message") depending on whether the condition is true or false, and then call that at the start of each Test which has this requirement.
But I'd like to do this as an Attribute, if possible.
How, in detail, do I achieve that, in NUnit?
What I've tried so far:
There's an IApplyToTest interface, exposed by NUnit, which I can make my Attribute implement, and will allow me to hook into the TestRunner, but I can't get it to do what I want :(
That interface gives me access to an NUnit.Framework.Internal.Test object.
If I call:
test.RunState = RunState.NotRunnable;
then I get something equivalent to Assert.Fail("").
Similarly RunState.Skipped or RunState.Ignored give me the equivalent of Assert.Ignore("").
But none of these are setting a message on the Test, and there's no test.Message = "foo"; or equivalent (that I can see).
There's a test.MakeInvalid("Foo") which does set a message, but that's equivalent to Assert.Fail("Foo").
I found something that looked promising:
var result = test.MakeTestResult();
result.SetResult(ResultState.Inconclusive, "Custom Message text");
But that doesn't seem to do anything; the Test just Passes :( I looked for a test.SetAsCurrentResult(result) method in case I need to "attach" that result object back to the test? But nothing doing.
It feels like this is supposed to be possible, but I can't quite figure out how to make it all play together.
If anyone can even show me how to get to Skipped + Custom Message displayed, then I'd probably take that!
If you really want your test to be Inconclusive, then that's what Assume.That is there for. Use it just as you would use Assert.That; if the specified constraint fails, your test result will be inconclusive.
That would be the simplest answer to your question.
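For example, a minimal sketch (NetworkIsAvailable() is a hypothetical helper you would write yourself):

[Test]
public void FetchesDataFromTheInternet()
{
    // Hypothetical helper; if the assumption fails, the result is Inconclusive rather than Failed.
    Assume.That(NetworkIsAvailable(), Is.True, "No internet connection for this run");

    // ... the rest of the test runs only when the assumption holds ...
}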
However, reading the things you have tried, I don't think you actually want Inconclusive, at least not as it is defined by NUnit.
In NUnit, Inconclusive means that the test doesn't count because it couldn't be run. The result basically disappears and the test run is successful.
You seem to be saying that you want to receive some notice that the condition failed. That makes sense in the situation where (for example) the internet was not available so your test run isn't definitive.
NUnit provides Assert.Ignore and Warn.If (also Warn.Unless) for those situations. Or you can set the corresponding result states in your custom attribute.
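For example, a minimal sketch (InternetIsAvailable() is again a hypothetical helper of your own):

[SetUp]
public void CheckPreconditions()
{
    // Skip the test, with a visible message, when the precondition is not met.
    if (!InternetIsAvailable())
        Assert.Ignore("No internet connection available - skipping.");

    // Alternatively, record a warning but keep running:
    // Warn.If(!InternetIsAvailable(), "No internet connection available.");
}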
Regarding implementation... The RunState of a test applies to its status before anyone has even tried to execute it. So, for example, the RunState may be Ignored if someone has used the IgnoreAttribute, or it may be NotRunnable if it requires arguments and none are provided. There is no Inconclusive run state because that would mean the test is written to be inconclusive all the time, which makes no sense. The IApplyToTest interface allows an attribute to change the status of a test at the point of discovery, before it is even run, so you would not want to use that.
After NUnit has attempted to run a test, it gets a ResultState, which might be Inconclusive. You can affect this in the code of the test but not currently through an attribute. What you want here is something that checks the conditions needed to run the test immediately before running it and skips execution if the conditions are not met. That attribute would need to be one that generates a command in the chain of commands that execute a test. It would probably need to implement ICommandWrapper to do that, which is a bit more complicated than IApplyToTest because the attribute code must generate a command instance that will work properly with NUnit itself and with other commands in the chain.
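A rough, untested sketch of that approach, assuming NUnit 3 and using IWrapTestMethod (which extends ICommandWrapper); the attribute name, message, and connectivity check are my own:

using System;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;
using NUnit.Framework.Internal.Commands;

[AttributeUsage(AttributeTargets.Method)]
public class RequiresInternetAttribute : Attribute, IWrapTestMethod
{
    public TestCommand Wrap(TestCommand command)
    {
        return new SkipIfOfflineCommand(command);
    }

    private class SkipIfOfflineCommand : DelegatingTestCommand
    {
        public SkipIfOfflineCommand(TestCommand innerCommand) : base(innerCommand) { }

        public override TestResult Execute(TestExecutionContext context)
        {
            if (!InternetIsAvailable())
            {
                // Mark the test as ignored with a visible message instead of running it.
                context.CurrentResult.SetResult(ResultState.Ignored, "No internet connection available.");
                return context.CurrentResult;
            }

            return innerCommand.Execute(context);
        }

        private static bool InternetIsAvailable()
        {
            // Stand-in for a real connectivity check.
            return System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable();
        }
    }
}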
If I had this situation, I believe I would use a run parameter to indicate whether the internet should be available. Then the tests could call
Assume.That(InternetIsNotNeeded());
silently ignoring those tests, or failing as expected when the internet should be available.
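A helper like InternetIsNotNeeded() could then read that run parameter through TestContext.Parameters; a minimal sketch, where the parameter name InternetAvailable is my own invention:

private static bool InternetIsNotNeeded()
{
    // Supplied on the command line, e.g. nunit3-console MyTests.dll --params:InternetAvailable=false
    // Adjust the comparison to whatever "not needed" should mean for your suite.
    return TestContext.Parameters.Get("InternetAvailable", "false") == "false";
}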
I'm learning DirectX12 and writing some utility classes to encapsulate functionality. Right now I'm working on mechanism for pooling CommandLists.
The pool assumes all command lists are closed. I wanted to validate that when inserting into the pool, but I can't manage to check it. From MSDN:
Returns S_OK if successful; otherwise, returns one of the following values:
E_FAIL if the command list has already been closed, or an invalid API was called during command list recording.
Which is precisely what I'm looking for, but when I call ID3D12GraphicsCommandList::Close() to validate, it throws an exception in KernelBase.dll. It looks really bizarre to me. Is this non-compliance with the specification?
//EDIT: I cannot catch the exception, even with catch(...). That suggests something may be wrong with my setup, but everything else is working for me.
I am looking for some simple answers on how to use functionality from MSBuild in a C# program. The native documentation seems to be completely useless, because I only find information like:
ConsoleLogger.ApplyParameter
Applies a parameter to the logger
This is the prototype of an explanation that would have been better never written. Neither here nor under the parameter's type do you find a link or any examples of what the parameters are there for, what their names are, or where to find that information.
The tutorials I find are all about MSBuild as a standalone tool.
At the moment I need to understand how to get more information about a failed build:
This method just returns true or false.
bool success = project.Build(new string[] { "Build", "Deploy"}, fileLogger);
Also, I need to understand how to configure the FileLogger and how to use it from the project.
Microsoft.Build.Logging.FileLogger fileLogger = new Microsoft.Build.Logging.FileLogger();
For the particular example in your question, ApplyParameter works the same way that the console logger parameters (/clp) work from the command line.
> msbuild /?
...
/consoleloggerparameters:<parameters>
Parameters to console logger. (Short form: /clp)
The available parameters are:
PerformanceSummary--Show time spent in tasks, targets
and projects.
Summary--Show error and warning summary at the end.
NoSummary--Don't show error and warning summary at the
end.
ErrorsOnly--Show only errors.
WarningsOnly--Show only warnings.
NoItemAndPropertyList--Don't show list of items and
properties at the start of each project build.
ShowCommandLine--Show TaskCommandLineEvent messages
ShowTimestamp--Display the Timestamp as a prefix to any
message.
ShowEventId--Show eventId for started events, finished
events, and messages
ForceNoAlign--Does not align the text to the size of
the console buffer
DisableConsoleColor--Use the default console colors
for all logging messages.
DisableMPLogging-- Disable the multiprocessor
logging style of output when running in
non-multiprocessor mode.
EnableMPLogging--Enable the multiprocessor logging
style even when running in non-multiprocessor
mode. This logging style is on by default.
Verbosity--overrides the /verbosity setting for this
logger.
Example:
/consoleloggerparameters:PerformanceSummary;NoSummary;
Verbosity=minimal
So for the example shown in the help,
logger.ApplyParameter("PerformanceSummary", "NoSummary");
logger.ApplyParameter("Verbosity", "minimal");
If you need a high degree of control over a logger you are attaching to the build engine from code, you might want to consider writing your own logger rather than trying to interpret/parse the text output from the stock console logger.
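For illustration, a minimal custom logger that just collects build errors (a sketch, not production code):

using System.Collections.Generic;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class ErrorCollectingLogger : Logger
{
    public List<BuildErrorEventArgs> Errors { get; } = new List<BuildErrorEventArgs>();

    public override void Initialize(IEventSource eventSource)
    {
        // Subscribe only to the events you care about - here, errors raised during the build.
        eventSource.ErrorRaised += (sender, args) => Errors.Add(args);
    }
}

Pass an instance to project.Build(...) alongside (or instead of) the FileLogger, and inspect Errors (File, LineNumber, Message, ...) when Build returns false.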
I am trying to use authlogic's test helpers in Cucumber, calling activate_authlogic.
Our application_controller has a current_user_session method.
When we drop into the debugger mid-story, controller returns an Authlogic::TestCase::MockController.
But when we call controller.current_user_session, we get an error:
The error occurred while evaluating nil.current_user_session
How does this mock suddenly become a nil?
And does this mock controller know about our application controllers' code?
I don't know authlogic (or whether this answer is helpful at all), but where does that mock object come from in the first place? You shouldn't be using any mocks in your Cucumber stories. Cucumber is like an integration test, testing the complete Rails stack.
I use it to make sure that my view, controller, and model specs haven't diverged from each other.