NUnit Inconclusive Confusion - visual-studio-2012

I have the following:-
[TestFixture]
class TaskServiceTest
{
    public void Implements_ITaskService()
    {
        var service = CreateService();
        Assert.That(service, Is.InstanceOf<ITaskService>());
    }

    private static ITaskService CreateService()
    {
        return null;
    }
}
When I run that in Visual Studio / ReSharper it is reported as 'Inconclusive'. The explanation given in the NUnit docs is:
The Assert.Inconclusive method indicates that the test could not be completed with the data available. It should be used in situations where another run with different data might run to completion, with either a success or failure outcome.
I don't see that holding here, so can anyone explain what I am doing wrong?
Thanks

I just realised that it is because I left the [Test] attribute off the unit test.
[Test]
public void Implements_ITaskService()
{
    var service = CreateService();
    Assert.That(service, Is.InstanceOf<ITaskService>());
}
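For contrast, the situation the NUnit docs describe is one where the test itself reports Inconclusive deliberately. A minimal sketch (the export file and ExportParser class are hypothetical, purely to illustrate the idea):

[Test]
public void Parses_Latest_Export_File()
{
    // Hypothetical external dependency: when the input file is missing, another run
    // (with the file present) could still pass or fail, so report Inconclusive.
    var path = @"C:\exports\latest.csv";
    if (!System.IO.File.Exists(path))
    {
        Assert.Inconclusive("Export file not found; cannot exercise the parser.");
    }

    var rows = ExportParser.Parse(path); // hypothetical class under test
    Assert.That(rows, Is.Not.Empty);
}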

Related

What causes Azure-hosted Hangfire jobs to behave like this?

I have been having trouble for a while with two recurring jobs (the ones at the top of this list) that don't get run even though they are scheduled.
I can trigger them just fine, and they get rescheduled, but when the schedule time comes around they don't run, and the "Next Execution" time just slips into the past.
Now there are a bunch of other jobs that are having the same problem. These are supposed to run hourly, but if they get past the schedule time they just don't run.
Visiting the dashboard makes no difference. The web app is always on. Hangfire will never run these jobs unless I trigger them manually. Jobs that AREN'T in this state still run just fine as scheduled every day or every hour.
What would cause this?
My Hangfire instance (version 1.7.6) is in an Azure Web App that is set to be always on. It uses an Azure SQL database for its data store.
Here's my Bootstrapper.cs code:
using System.Configuration;
using System.Web.Hosting;
using Hangfire;

namespace MyApi
{
    public class HangfireBootstrapper : IRegisteredObject
    {
        public static readonly HangfireBootstrapper Instance = new HangfireBootstrapper();

        private readonly object _lockObject = new object();
        private bool _started;
        private BackgroundJobServer _backgroundJobServer;

        private HangfireBootstrapper()
        {
        }

        public void Start()
        {
            lock (_lockObject)
            {
                if (_started) return;
                _started = true;

                HostingEnvironment.RegisterObject(this);

                var jobOptions = new BackgroundJobServerOptions();
                jobOptions.ServerName = ConfigurationManager.AppSettings.Get("hangfire:servername");
                jobOptions.Queues = new[] { "k1" };

                GlobalConfiguration.Configuration
                    .UseSqlServerStorage("Kdb");

                _backgroundJobServer = new BackgroundJobServer(jobOptions);
            }
        }

        public void Stop()
        {
            lock (_lockObject)
            {
                if (_backgroundJobServer != null)
                {
                    _backgroundJobServer.Dispose();
                }

                HostingEnvironment.UnregisterObject(this);
            }
        }

        void IRegisteredObject.Stop(bool immediate)
        {
            Stop();
        }
    }
}
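For context, a bootstrapper like this is normally started and stopped from the application lifecycle; a minimal sketch of the Global.asax wiring, assuming the standard Hangfire IRegisteredObject pattern is being followed here:

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, System.EventArgs e)
    {
        // Start the background job server when the web app starts.
        HangfireBootstrapper.Instance.Start();
    }

    protected void Application_End(object sender, System.EventArgs e)
    {
        // Dispose the server cleanly when the app domain shuts down.
        HangfireBootstrapper.Instance.Stop();
    }
}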
Here's code that is used to queue the majority of the jobs:
jobMgr.AddOrUpdate($"Script.{i1}.{name}", Job.FromExpression(() => HangfireJobs.ReplayQueue.EnqueueScript(scriptId, i1, null)),cronExpression);
Here's how my EnqueueScript method is defined:
[Queue("k1")]
public static void EnqueueScript(Guid scriptId, int env, PerformContext context)
{
    try
    {
        ...
This issue was resolved by upgrading Hangfire to version 1.7.8.
The Hangfire bug report is viewable at https://github.com/HangfireIO/Hangfire/issues/1459
It appears to have been introduced around version 1.7.4 (maybe earlier, but certainly by then) and fixed with version 1.7.8.

How to use ClassInitialize so that the application opens at the beginning, all the tests run, and it closes at the end

My code:
This is the initialize method:
[TestInitialize()]
public void MyTestInitialize()
{ }
This is test 1:
[TestMethod]
public void Validate_Create_Command()
{ }
This is test 2:
[TestMethod]
public void Validate_Delete_Command()
{ }
Right now test 1 opens the application and closes it, and test 2 also opens the application and closes it.
My question is: how do I open the application once, run all the tests, and close the application after all tests complete?
First, I would recommend that you always open the application at the beginning of a test and close it at the end. Your recordings should be flexible enough that you can combine them to navigate to different parts of the app. I'll explain how best to do that in a moment, but first your actual question.
If you want to open at the start and close at the end, I use this pattern:
[TestClass]
public class Tests
{
    [TestMethod]
    public void TestMethod1()
    {
        UIMap.ClickNext();
        UIMap.ClickPlusButton();
        UIMap.AssertStuff();
    }

    [TestMethod]
    public void TestMethod2()
    {
        UIMap.ClickNext();
        UIMap.ClickMinusButton();
        UIMap.AssertStuff();
    }

    [ClassInitialize()]
    public static void ClassInitialize(TestContext testcontext)
    {
        Utilities.Launch();
    }

    [ClassCleanup()]
    public static void ClassCleanup()
    {
        Utilities.Close();
    }
}

public static class Utilities
{
    private static ApplicationUnderTest App;

    public static void Launch()
    {
        try
        {
            App = ApplicationUnderTest.Launch(pathToExe);
        }
        catch (Microsoft.VisualStudio.TestTools.UITest.Extension.FailedToLaunchApplicationException e)
        {
            // Intentionally swallowed: ignore a failed launch here.
        }
    }

    public static void Close()
    {
        App.Close();
        App = null;
    }
}
To do this on a test-by-test basis you simply use the normal [TestInitialize()] and [TestCleanup()] attributes.
You could copy the method calls to launch and close the application from the test methods into the initialize and cleanup methods, then delete the calls from the test methods.
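A minimal sketch of what that would look like, reusing the Utilities class from the pattern above:

[TestInitialize()]
public void TestInitialize()
{
    // Launch the application before each individual test...
    Utilities.Launch();
}

[TestCleanup()]
public void TestCleanup()
{
    // ...and close it again afterwards.
    Utilities.Close();
}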
The way that Coded UI manages applications between test cases changed between Visual Studio 2010 and 2012, as did the way that CloseOnPlaybackCleanup works. For more details see http://blogs.msdn.com/b/visualstudioalm/archive/2012/11/08/using-same-applicationundertest-browserwindow-across-multiple-tests.aspx
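If you instead want a single launched instance to survive across tests in Visual Studio 2012, the property discussed in that post can be set on the launched application; a hedged sketch of the relevant change to Utilities.Launch, assuming that behaviour is what you want:

public static void Launch()
{
    App = ApplicationUnderTest.Launch(pathToExe);
    // Opt out of closing the app during playback cleanup so the same instance
    // can be reused by the next test (see the linked blog post).
    App.CloseOnPlaybackCleanup = false;
}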
You will need to re-record test 1 and test 2 to no longer open/close the application.
In TestInitialize, record the launching of your application.
In TestCleanup, record the closing of your application.
What will happen when you run the CodedUI test is:
Step 1: TestInitialize runs, which launches your application
Step 2: Test1 and Test2 run (again, you will have removed the launching/closing of your app)
Step 3: TestCleanup runs, which closes your application
#region Additional test attributes

// Use TestInitialize to run code before running each test
[TestInitialize()]
public void MyTestInitialize()
{
    this.UIMap.OpenMyApplication();
}

// Use TestCleanup to run code after each test has run
[TestCleanup()]
public void MyTestCleanup()
{
    this.UIMap.CloseMyApplication();
}

#endregion

BeforeFeature/AfterFeature does not work using SpecFlow and Coded UI

I am not able to define a [BeforeFeature]/[AfterFeature] hook for my feature file. The application under test is a standalone WPF desktop application.
If I use [BeforeScenario]/[AfterScenario] everything works fine, the application starts without any problem, the designed steps are performed correctly and the app is closed.
Once I use the same steps with the [BeforeFeature]/[AfterFeature] tags, the application starts and the test fails with:
The following error occurred when this process was started: Object reference not set to an instance of an object.
Here is an example:
[Binding]
public class Setup
{
    [BeforeScenario("setup_scenario")]
    public static void BeforeAppScenario()
    {
        UILoader.General.StartApplication();
    }

    [AfterScenario("setup_scenario")]
    public static void AfterAppScenario()
    {
        UILoader.General.CloseApplication();
    }

    [BeforeFeature("setup_feature")]
    public static void BeforeAppFeature()
    {
        UILoader.General.StartApplication();
    }

    [AfterFeature("setup_feature")]
    public static void AfterAppFeature()
    {
        UILoader.General.CloseApplication();
    }
}
StartApplication/CloseApplication were recorded and auto-generated with Coded UI Test Builder:
public void StartApplication()
{
    // Launch '%ProgramFiles%\...
    ApplicationUnderTest Application = ApplicationUnderTest.Launch(this.StartApplicationParams.ExePath, this.StartApplicationParams.AlternateExePath);
}

public class StartApplicationParams
{
    public string ExePath = "C:\\Program Files..."
    public string AlternateExePath = "%ProgramFiles%\\..."
}
Noteworthy: I'm quite new to SpecFlow.
I can't figure out why my test fails with [BeforeFeature] and works fine with [BeforeScenario].
It would be great if somebody could help me with this issue. Thanks!
I ran into a similar problem recently. Not sure if this can still help you, but it may be of use for people who stumble upon this question.
For BeforeFeature/AfterFeature to work, the feature itself needs to be tagged; tagging just specific scenarios will not work.
Your feature files should start like this:
@setup_feature
Feature: Name Of Your Feature

  @setup_scenario
  Scenario: ...

Geb Reporter / Extension to retrieve Test Status

I am attempting to replace some custom java selenium extensions by utilizing geb. I have hit a bit of a brick wall when I attempt to utilize a grid in the cloud (i.e. SauceLabs). When my tests complete, it'd be nice to send an update back to indicate whether or not the test has failed or succeeded. To utilize this, I need the sessionId from the RemoteWebDriver instance. This can be obtained in a custom Reporter, however I can't determine the success with this interface. Since I am extending the GebReportingSpec, I attempted to create my own custom version, which had a custom Junit rule to track success or failure:
public class TestSuccess extends TestWatcher {
    boolean success;
    String message;

    @Override
    protected void starting(Description d) {
        message = d.getMethodName();
    }

    @Override
    protected void succeeded(final Description description) {
        System.out.println("Test Success [succeeded] " + description);
        this.success = true;
    }

    @Override
    protected void failed(final Throwable e, final Description description) {
        System.out.println("Test Success [failed] " + description);
        this.success = false;
    }

    public boolean isSuccess() {
        return success;
    }

    @Override
    public String toString() {
        return message + " success: <" + success + ">.";
    }
}
I then added that to my CustomReportingSpec:
class CustomReportingSpec extends GebReportingSpec {
    /* I also tried creating this as a RuleChain with:
     *   @Rule TestRule chain = RuleChain.outerRule(
     *       super._gebReportingSpecTestName).around(new TestSuccess());
     * however, this results in an NPE. Placing the super rule in the around
     * still results in an NPE.
     */
    @Rule public TestSuccess _gebTestSuccesswatcher = new TestSuccess();

    // I never see this called
    void report() {
        System.out.println("Custom Reporting Spec: " + _gebTestSuccesswatcher + "\t")
        super.report()
    }
}
I have also attempted to set this up in a custom reporter:
public class CustomReporter extends ScreenshotAndPageSourceReporter implements Reporter {
    @Rule
    public TestSuccess _gebTestSuccesswatcher = new TestSuccess();

    @Override
    public void writeReport(Browser browser, String label, File outputDir) {
        System.out.println("Custom Reporter: " + _gebTestSuccesswatcher);
        super.writeReport(browser, label, outputDir)
    }
}
However, regardless of whether or not my test fails, the success method on the watcher seems to be called. Here is my sample test:
class OneOff extends CustomReportingSpec {
    def "Check One off"() {
        when:
        go "http://www.google.com"

        then:
        1 == 2
    }
}
And the output:
Custom Reporter: null success: <false>.
Test Success [succeeded] Check One off(OneOff)
As you can see, the success method is called upon completion of this failing test. If I modify the test to pass (i.e. 1 == 1), here is my output:
Custom Reporter: null success: <false>.
Test Success [succeeded] Check One off(OneOff)
Is there any way for me to get this Rule to work properly in the Custom Reporter? Or is there a way to get the browser instance in an extension? I've followed this guide to create a custom annotation and listener, but I can't access the Browser object. I have attempted adding an @Shared annotation onto the declaration of the browser, but it isn't pulling the one in the Geb Configuration.
Your TestSuccess class doesn't work correctly due to a known limitation in Spock's TestRule support. Due to subtle differences between Spock's and JUnit's test execution model, calling base.evaluate() from a TestRule will not throw an exception in Spock, even if the test has failed. In many cases this won't make a difference, but for TestWatcher it will.
This is the only known limitation in Spock's rule support, and hopefully we'll find a way to overcome it at some point. There is no such semantic mismatch when using MethodRule.
If you want to implement your requirement with the help of a JUnit rule (which I think is fine), MethodRule is probably the better choice anyway. In contrast to TestRule, MethodRule provides access to the test instance, which will allow you to grab the session ID with browser.driver.sessionId.

Spec fails when run by mspec.exe, but passes when run by TD.NET

I wrote about this topic in another question.
However, I've since refactored my code to get rid of configuration access, thus allowing the specs to pass. Or so I thought. They run fine from within Visual Studio using TestDriven.Net. However, when I run them during rake using the mspec.exe tool, they still fail with a serialization exception. So I've created a completely self-contained example that does basically nothing except setup fake security credentials on the thread. This test passes just fine in TD.Net, but blows up in mspec.exe. Does anybody have any suggestions?
Update: I've discovered a work-around. After researching the issue, it seems the cause is that the assembly containing my principal object is not in the same folder as mspec.exe. When mspec creates a new AppDomain to run my specs, that new AppDomain has to load the assembly with the principal object in order to deserialize it. That assembly is not in the same folder as the mspec executable, so it fails. If I copy my assembly into the same folder as mspec, it works fine.
What I still don't understand is why ReSharper and TD.Net can run the test just fine? Do they not use mspec.exe to actually run the tests?
using System;
using System.Security.Principal;
using System.Threading;
using Machine.Specifications;

namespace MSpecTest
{
    [Subject(typeof(MyViewModel))]
    public class When_security_credentials_are_faked
    {
        static MyViewModel SUT;

        Establish context = SetupFakeSecurityCredentials;

        Because of = () =>
            SUT = new MyViewModel();

        It should_be_initialized = () =>
            SUT.Initialized.ShouldBeTrue();

        static void SetupFakeSecurityCredentials()
        {
            Thread.CurrentPrincipal = CreatePrincipal(CreateIdentity());
        }

        static MyIdentity CreateIdentity()
        {
            return new MyIdentity(Environment.UserName, "None", true);
        }

        static MyPrincipal CreatePrincipal(MyIdentity identity)
        {
            return new MyPrincipal(identity);
        }
    }

    public class MyViewModel
    {
        public MyViewModel()
        {
            Initialized = true;
        }

        public bool Initialized { get; set; }
    }

    [Serializable]
    public class MyPrincipal : IPrincipal
    {
        private readonly MyIdentity _identity;

        public MyPrincipal(MyIdentity identity)
        {
            _identity = identity;
        }

        public bool IsInRole(string role)
        {
            return true;
        }

        public IIdentity Identity
        {
            get { return _identity; }
        }
    }

    [Serializable]
    public class MyIdentity : IIdentity
    {
        private readonly string _name;
        private readonly string _authenticationType;
        private readonly bool _isAuthenticated;

        public MyIdentity(string name, string authenticationType, bool isAuthenticated)
        {
            _name = name;
            _isAuthenticated = isAuthenticated;
            _authenticationType = authenticationType;
        }

        public string Name
        {
            get { return _name; }
        }

        public string AuthenticationType
        {
            get { return _authenticationType; }
        }

        public bool IsAuthenticated
        {
            get { return _isAuthenticated; }
        }
    }
}
Dan, thank you for providing a reproduction.
First off, the console runner works differently than the TestDriven.NET and ReSharper runners. Basically, the console runner has to perform a lot more setup work in that it creates a new AppDomain (plus configuration) for every assembly that is run. This is required to load the .dll.config file for your spec assembly.
Per spec assembly, two AppDomains are created:
The first AppDomain (Console) is created implicitly when mspec.exe is executed,
a second AppDomain is created by mspec.exe for the assembly containing the specs (Spec).
Both AppDomains communicate with each other through .NET Remoting: For example, when a spec is executed in the Spec AppDomain, it notifies the Console AppDomain of that fact. When Console receives the notification it acts accordingly by writing the spec information to the console.
This communication between Spec and Console is realized transparently through .NET Remoting. One property of .NET Remoting is that some properties of the calling AppDomain (Spec) are automatically included when sending notifications to the target AppDomain (Console). Thread.CurrentPrincipal is such a property. You can read more about that here: http://sontek.vox.com/library/post/re-iprincipal-iidentity-ihttpmodule-serializable.html
The context you provide will run in the Spec AppDomain. You set Thread.CurrentPrincipal in the Establish. After the Because has run, a notification will be issued to the Console AppDomain. The notification will include your custom MyPrincipal, which the receiving Console AppDomain tries to deserialize. It cannot do that since it doesn't know about your spec assembly (as it is not included in its private bin path).
This is why you had to put your spec assembly in the same folder as mspec.exe.
There are two possible workarounds:
1. Derive MyPrincipal and MyIdentity from MarshalByRefObject so that they can take part in cross-AppDomain communication through a proxy instead of being serialized (a sketch follows the code below).
2. Set Thread.CurrentPrincipal transiently in the Because:
Because of = () =>
{
    var previousPrincipal = Thread.CurrentPrincipal;
    try
    {
        Thread.CurrentPrincipal = new MyPrincipal(...);
        SUT = new MyViewModel();
    }
    finally
    {
        Thread.CurrentPrincipal = previousPrincipal;
    }
};
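A minimal sketch of the first workaround, assuming a cross-AppDomain proxy is acceptable for your specs (deriving from MarshalByRefObject means the Console AppDomain receives a remoting proxy instead of trying to deserialize the type locally):

public class MyPrincipal : MarshalByRefObject, IPrincipal
{
    private readonly MyIdentity _identity;

    public MyPrincipal(MyIdentity identity)
    {
        _identity = identity;
    }

    public bool IsInRole(string role)
    {
        return true;
    }

    public IIdentity Identity
    {
        get { return _identity; }
    }
}

// MyIdentity would likewise derive from MarshalByRefObject and keep its existing members;
// [Serializable] is no longer needed because the objects themselves never cross the AppDomain boundary.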
ReSharper, for example, handles all the communication work for us. MSpec's ReSharper Runner can hook into the existing infrastructure (that, AFAIK, does not use .NET Remoting).
