I'm trying to write a DRY Cucumber feature and I'm facing the problem of converting a string into an ActiveRecord model class.
Given /^the following "(.+)" exist:/ do |mod, table|
  table.hashes.each do |t|
    mod.create!(t)
  end
  assert mod.all.count == table.hashes.size
end
That gives:
undefined method `create!' for "Balloon":String (NoMethodError)
A more elegant solution might be to use a factory, but I'm wondering whether it is possible to use the above approach?
You could look into constantize, which turns a String into a constant. Try:
"Balloon".constantize.create!(t)
BUT: Using your app code (models in particular) in a Cucumber step is a code smell. Your integration tests shouldn't rely on the code under test at all; think of your app as a black box when you implement Cucumber steps. (Also think of a refactoring of your models that requires you to go back and change your Cucumber steps: that's your first clue that you're on the wrong track!)
What you could do to improve this is create the models using an API (if your app implements one).
That way, you only rely on those parts of your app that are public-facing.
On another note: your Given shouldn't contain an assertion; it's more like a before hook in RSpec, setting up a condition for a later assertion...
I have some tests that depend on a certain thing being true (access to the internet, as it happens, but that isn't important and I don't want to discuss the details of the condition).
I can very easily write a static helper method which tests the (parameterless) condition and calls Assert.Inconclusive("Explanatory Message") when it isn't met, and then call that helper at the start of each test which has this requirement.
But I'd like to do this as an Attribute, if possible.
How, in detail, do I achieve that, in NUnit?
What I've tried so far:
There's an IApplyToTest interface, exposed by NUnit, which I can make my attribute implement; it lets me hook into the test runner, but I can't get it to do what I want :(
That interface gives me access to an NUnit.Framework.Internal.Test object.
If I call:
test.RunState = RunState.NotRunnable;
then I get something equivalent to Assert.Fail("").
Similarly RunState.Skipped or RunState.Ignored give me the equivalent of Assert.Ignore("").
But none of these are setting a message on the Test, and there's no test.Message = "foo"; or equivalent (that I can see).
There's a test.MakeInvalid("Foo") which does set a message, but that's equivalent to Assert.Fail("Foo").
I found something that looked promising:
var result = test.MakeTestResult();
result.SetResult(ResultState.Inconclusive, "Custom Message text");
But that doesn't seem to do anything; the Test just Passes :( I looked for a test.SetAsCurrentResult(result) method in case I need to "attach" that result object back to the test? But nothing doing.
It feels like this is supposed to be possible, but I can't quite figure out how to make it all play together.
If anyone can even show me how to get to Skipped + Custom Message displayed, then I'd probably take that!
If you really want your test to be Inconclusive, then that's what Assume.That is there for. Use it just as you would use Assert.That; if the specified constraint fails, your test result will be Inconclusive.
That would be the simplest answer to your question.
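A minimal sketch of that approach (NUnit 3; InternetIsAvailable() is a hypothetical helper you would supply yourself):

using NUnit.Framework;

[TestFixture]
public class DownloadTests
{
    [Test]
    public void DownloadsManifest()
    {
        // If this assumption fails, the result is Inconclusive and the rest
        // of the test body is skipped; it does not count as a failure.
        Assume.That(InternetIsAvailable(), "No internet connection available");

        // ... the assertions that actually need the network ...
    }

    private static bool InternetIsAvailable()
    {
        // Hypothetical probe; replace with whatever check suits your environment.
        try
        {
            using (var client = new System.Net.Sockets.TcpClient("8.8.8.8", 53))
                return true;
        }
        catch (System.Net.Sockets.SocketException)
        {
            return false;
        }
    }
}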
However, reading through the things you have tried, I don't think you actually want Inconclusive, at least not as it is defined by NUnit.
In NUnit, Inconclusive means that the test doesn't count because it couldn't be run. The result basically disappears and the test run is successful.
You seem to be saying that you want to receive some notice that the condition failed. That makes sense in the situation where (for example) the internet was not available so your test run isn't definitive.
NUnit provides Assert.Ignore and Warn.If (also Warn.Unless) for those situations. Or you can set the corresponding result states in your custom attribute.
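For example, reusing the hypothetical InternetIsAvailable() helper from the sketch above:

[Test]
public void UploadsReport()
{
    // Shows up as Skipped, with this message attached to the result.
    if (!InternetIsAvailable())
        Assert.Ignore("Skipped: no internet connection available");

    // Alternatively, record a warning but let the test continue and pass:
    // Warn.If(!InternetIsAvailable(), "Internet not available; results may be incomplete");

    // ... test body ...
}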
Regarding implementation... The RunState of a test applies to its status before anyone has even tried to execute it. So, for example, the RunState may be Ignored if someone has used the IgnoreAttribute, or it may be NotRunnable if it requires arguments and none are provided. There is no Inconclusive run state, because that would mean the test is written to be inconclusive all the time, which makes no sense. The IApplyToTest interface allows an attribute to change the status of a test at the point of discovery, before it is even run, so you would not want to use that.
After NUnit has attempted to run a test, it gets a ResultState, which might be Inconclusive. You can affect this in the code of the test but not currently through an attribute. What you want here is something that checks the conditions needed to run the test immediately before running it and skips execution if the conditions are not met. That attribute would need to be one that generates a command in the chain of commands that execute a test. It would probably need to implement ICommandWrapper to do that, which is a bit more complicated than IApplyToTest because the attribute code must generate a command instance that will work properly with NUnit itself and with other commands in the chain.
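For what it's worth, a rough sketch of that command-wrapping idea is below. It leans on NUnit internals (IWrapTestMethod, which specializes ICommandWrapper, plus DelegatingTestCommand and TestExecutionContext), so treat it as a starting point under those assumptions rather than a verified implementation; InternetIsAvailable() is again a stand-in:

using System;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;
using NUnit.Framework.Internal.Commands;

[AttributeUsage(AttributeTargets.Method)]
public class RequiresInternetAttribute : Attribute, IWrapTestMethod
{
    public TestCommand Wrap(TestCommand command) => new SkipIfOfflineCommand(command);

    private static bool InternetIsAvailable() => true;   // stand-in for a real connectivity probe

    private class SkipIfOfflineCommand : DelegatingTestCommand
    {
        public SkipIfOfflineCommand(TestCommand innerCommand) : base(innerCommand) { }

        public override TestResult Execute(TestExecutionContext context)
        {
            if (!InternetIsAvailable())
            {
                // Report the test as skipped, with a visible reason.
                context.CurrentResult.SetResult(ResultState.Ignored, "No internet connection available");
                return context.CurrentResult;
            }
            return innerCommand.Execute(context);
        }
    }
}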
If I had this situation, I believe I would use a run parameter to indicate whether the internet should be available. Then, the tests could call
Assume.That(InternetIsNotNeeded());
silently ignoring those tests, or failing as expected when the internet should be available.
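A sketch of that run-parameter idea, assuming a parameter named InternetAvailable passed to the console runner (e.g. --params:InternetAvailable=true):

[Test]
public void DownloadsManifest()
{
    bool internetExpected =
        TestContext.Parameters.Get("InternetAvailable", "false") == "true";

    // When the run says the internet should not be available, drop out as
    // Inconclusive; otherwise run normally and fail as expected if the
    // connection turns out to be broken.
    Assume.That(internetExpected, "This run was started without internet access");

    // ... test body that needs the network ...
}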
I'm a little bit confused about using xUnit to test my DAL.
My goal is to verify that my DAL correctly accesses the DB and extracts the right data.
I created an xUnit test project and tried to do a simple test with Moq, like this:
[Fact]
public void Test1()
{
    // Arrange
    var mockMyClass = new Mock<IMyClassBLL>();

    // Set up the mock to return some fake data from our target method
    mockMyClass.Setup(ac => ac.GetBy(It.IsAny<MyClassVO>())).Returns(new List<MyClassVO>
    {
        new MyClassVO { HCO_ID = "1" },
        new MyClassVO { HCO_ID = "2" },
        new MyClassVO { HCO_ID = "3" },
        new MyClassVO { HCO_ID = "4" }
    });

    // create our MyTest by injecting our mock repository
    var MyTest = new MyClassBLL(mockMyClass.Object);

    // ACT - call our method under test
    var result = MyTest.GetBy();

    // ASSERT - our fake data has 4 items, so we expect 4 back from the method
    Assert.True(result.Count == 4);
}
The test above works fine.
Now I want to access the DB directly to get the data.
Obviously something escapes me: I don't understand how to perform a data test with .NET Core 2, simulating dependency injection and accessing the data.
Can someone clarify my ideas?
Are you looking for a unit test or an integration test? They're fundamentally different things and serve different purposes.
If your goal is to ensure that GetBy (the unit of functionality under test) does what it's supposed to do, then you should not be using live data. A real connection with real data would introduce variables, causing the test to potentially fail when there's actually nothing wrong with GetBy. For a true unit test, you should only use mocks and test data.
If your goal is to ensure that your application can connect to your database and actually draw data out of it, then that's an integration test. You might use GetBy or your repository in such a test, but generally you'd want to avoid that. Again, connecting and querying directly via something like ADO.NET serves to remove variables, so if the test fails, you'll know it was because there actually was a problem connecting or querying, rather than just some issue with your repository or a particular method of it.
Long and short, a good test tests just one thing. If that particular thing requires external components (such as a SQL Server database), then it's an integration test, and at that point, you're testing the integration of the component. Something like a repository method should not come into play, as that would be testing two different things in one test. If you need to test GetBy then there should be no external dependencies, such as a SQL Server database.
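As an illustration, here's a bare-bones integration test that connects and queries directly via ADO.NET; the connection string, table, and column names are placeholders you'd replace with your own:

using System.Data.SqlClient;
using Xunit;

public class DatabaseIntegrationTests
{
    private const string ConnectionString =
        @"Server=(localdb)\MSSQLLocalDB;Database=MyTestDb;Integrated Security=true";

    [Fact]
    public void Seeded_rows_can_be_read_back()
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();

            using (var command = new SqlCommand("SELECT COUNT(*) FROM MyTable", connection))
            {
                var rowCount = (int)command.ExecuteScalar();

                // If this fails, the problem is the connection/query itself,
                // not any repository code.
                Assert.True(rowCount > 0);
            }
        }
    }
}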
Additionally:
I don't understand how to perform a data test with .NET Core 2, simulating dependency injection and accessing the data.
This would be an example of testing the framework, which is another no-no. You can safely assume that DI works in ASP.NET Core. It has its own test suite covering that. There is no need for you to add tests for that as well.
This question is very similar to the one raised here. It's just that I cannot make the proposed solution work (and I don't have the reputation to add comments).
I want to create a method with just the global warming potentials of CO2, CH4 and N2O associated with fossil fuel combustion.
Looking into the biosphere database, I created a list of tuples with the flows' keys and characterization factors:
flows_list = []
for flow in bw.Database('biosphere3'):
    if 'Carbon dioxide, fossil' in flow['name']:
        flows_list.append((flow.key, 1.0))
    if flow['name'] == 'Methane':
        flows_list.append((flow.key, 29.7))
    if flow['name'] == 'Dinitrogen monoxide':
        flows_list.append((flow.key, 264.8))
and then:
ipcc2013_onlyfossil = bw.Method(('IPCC2013_onlyfossil', 'climate change', 'GWP100a'))
ipcc2013_onlyfossil.register(**{'unit': 'kg CO2eq',
                                'num_cfs': 11,
                                'abbreviation': 'nonexistent',
                                'description': 'based on IPCC 2013 method but just for fossil CO2, CH4 and N2O',
                                'filename': 'nonexistent'})
(I don't understand the purpose of the double **, or whether we can leave keys in the metadata dictionary empty)
lastly:
ipcc2013_onlyfossil.write(flows_list)
I must be doing something wrong, because if I try to use this method I get an assertion error: Brightway can't find the method.
UPDATE: there was an error in my code; the new method works perfectly fine.
For instance, if I run:
[m for m in bw.methods if 'IPCC' in m[0] and 'GWP100' in str(m)]
I get a list of methods, including the one I attempted to create, and I can do LCA calculations with it.
(PS: it is not very clear to me how I should use the validate() method of the Method class...)
There are a bunch of questions here...
How do I create and write a new method?
Your code works perfectly on my computer, and it should: it's doing the same thing that the base methods importer does. Maybe you can be more explicit about what "If I try to use this method I get an assertion error" means?
After running your code, you should be able to do ipcc2013_onlyfossil.metadata or ipcc2013_onlyfossil.load(). It is there!
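For example (assuming the same method tuple as above):

import brightway2 as bw

my_method = bw.Method(('IPCC2013_onlyfossil', 'climate change', 'GWP100a'))
print(my_method.metadata)   # the metadata passed to register()
print(my_method.load())     # the (flow key, CF) tuples that were written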
How should I use the validate() function?
If you ask for the docstring in IPython, you get an idea:
> m.validate?
Signature: m.validate(data)
Docstring: Validate data. Must be called manually.
So you can do ipcc2013_onlyfossil.validate(flows_list), but you don't need to: your code is fine.
What metadata should I provide to Method.register()?
No metadata is required, and you should skip anything you don't know; abbreviation and num_cfs are generated automatically.
What does ** mean?
Good answers already exist
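In short, ** unpacks a dictionary into keyword arguments, so the two calls below are equivalent:

metadata = {'unit': 'kg CO2eq',
            'description': 'based on IPCC 2013 method but just for fossil CO2, CH4 and N2O'}

ipcc2013_onlyfossil.register(**metadata)
ipcc2013_onlyfossil.register(unit='kg CO2eq',
                             description='based on IPCC 2013 method but just for fossil CO2, CH4 and N2O')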
I have a Selenium test, which is executed with the help of Spock framework. In general it looks like this:
class SeleniumSpec extends Specification {
    URL remoteAddress     // Address of SE grid
    Capabilities caps     // Desired capabilities
    WebDriver driver      // Web driver

    def setup() {
        driver = new RemoteWebDriver(remoteAddress, caps)
    }

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    // other tests go here ...
}
The point here is that my specification describes the behavior of some component (in most cases, web views/pages). So the methods are expected to implement some business-related logic (something like 'click on a button and expect a message in another field'); but another thing I would like to test is that the behavior is exactly the same in all browsers (capabilities).
To achieve this in an 'ideal' world, I'd like a mechanism to specify that a particular test class should be run several times, but with different parameters. For now, though, I only see the ability to apply data sets to a single method.
I came up with only a few ideas for implementing this (given my current knowledge of the Spock framework):
Use a list of drivers and execute each action over all list members. So each call to 'driver' would be replaced with a 'drivers.each { it }' invocation. This approach, on the other hand, makes it hard to discover exactly which driver failed the test.
Use method parameters and data sets to initiate a fresh copy of the web driver on each iteration. This approach seems more logical according to the Spock philosophy, but it requires the heavy operation of driver and web application initialization to be performed every time. It also removes the ability to perform 'step-by-step' testing, since the state of the driver won't be preserved between test methods.
A combination of these approaches, where drivers are kept in a map and each test invocation names the exact driver to be used.
I'd appreciate it if anybody who has faced this case could share ideas on how to properly organize the testing process, point out other approaches, or weigh the pros and cons of those above. Considering another test tool could also be an option.
You could create an abstract BaseSeleniumSpec which contains all your features, but do not set up the driver in that spec. Then create a sub-spec for each different browser you want to test, e.g.
class FirefoxSeleniumSpec extends BaseSeleniumSpec {
    def setupSpec() {
        super.driver = new FirefoxDriver(...)
    }
}
And then you can run all of the sub-specs to test all the browsers
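A sketch of what the corresponding base spec could look like, assuming the driver is held in a @Shared field so that setupSpec() can assign it:

abstract class BaseSeleniumSpec extends Specification {
    @Shared WebDriver driver

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    // other shared features go here ...
}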
I am trying to use authlogic's test helpers in Cucumber, calling activate_authlogic.
Our application_controller has a current_user_session method.
When we drop into the debugger mid-story, controller returns an Authlogic::TestCase::MockController.
But when we call controller.current_user_session, we get:
The error occurred while evaluating nil.current_user_session.
How does this mock suddenly become a nil?
And does this mock controller know about our application controller's code?
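For context, the activation is wired up in a Cucumber support file roughly along these lines (the file name is arbitrary; this is the commonly suggested wiring rather than anything unusual on our side):

# features/support/authlogic.rb
require 'authlogic/test_case'

World(Authlogic::TestCase)

Before do
  activate_authlogic   # swaps in Authlogic::TestCase::MockController for the session
end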
I don't know Authlogic (and I'm not sure this answer is helpful at all), but where does that mock object come from in the first place? You shouldn't be using any mocks in your Cucumber stories. Cucumber is like an integration test, exercising the complete Rails stack.
I use it to make sure that my view, controller and model specs haven't diverged from each other.