Unit Testing File with Space in the Name - python-3.x

I have a Python project where the main file has spaces in the name. I understand that this is not encouraged, but in this case I think it's necessary. The file is compiled into a standalone executable using py2app, which uses the file name for the application name when building the executable (app name, menu references, etc.). This works fine because the base file is not imported anywhere within the project and py2app handles the spaces gracefully. Let's call the file Application Name.py.
To run unit tests against Application Name.py, however, I have to eliminate the spaces so that the file can be imported into unittest. I'm unable to use importlib or __import__ because the file is not constructed as a package, so both approaches fail. My workflow has been to rename the file to application_name.py, run the unit tests, and then rename it back to Application Name.py before compiling it into Application Name.app.
So the options appear to be:
Keep doing what I'm doing (workable, but not ideal),
Create a wrapper called Application Name.py that imports application_name.py where the wrapper doesn't need to be unit tested (seems silly),
Convert Application Name into a package so I can use importlib (seems like overkill), or
Something else entirely.
Is there some way to gracefully handle file names with spaces in unit testing that I'm not seeing or should I just suck it up?

Seems like option 2 probably works best.
Options 1 and 2 are your best bets (yes, 3 is a bit of overkill), and although 2 seems excessive, it does isolate your Python logic from your py2app requirements - now Application Name.py is a "py2app wrapper file", and application_name.py contains your actual logic.
This works better than option 1 because separation of responsibilities is generally preferred. If you come up with other requirements for your application name, you only have to deal with the "py2app wrapper file" and don't have to change anything related to the actual logic.
Your current workflow works too, but it means more manual renaming whenever you want to run the unit tests - what if you want to automate the unit testing process?
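A minimal sketch of option 2, assuming application_name.py exposes a main() entry point (the entry point name is an illustrative assumption):

# Application Name.py -- thin py2app wrapper, deliberately left out of the unit tests
from application_name import main  # assumption: all testable logic lives in application_name.py behind main()

if __name__ == "__main__":
    main()

The unit tests then import application_name directly, while py2app builds against Application Name.py.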

Related

How to get SMAC3 working for Python 3x on Windows

This is a great package for Bayesian optimization of hyperparameters (especially mixed integer/continuous/categorical... and it has been shown to beat Spearmint in benchmarks). However, it is clearly meant for Linux. What do I do...?
First you need to download swig.exe (the whole package) and unzip it. Then drop it somewhere and add the folder to your PATH so that the installer for SMAC3 can call swig.exe.
Next, the resource module is going to cause issues because it is only meant for Linux. It is used specifically by pynisher. You'll need to comment out import pynisher in the execute_func.py module. Then set use_pynisher: bool = False in the def __init__(self...) in the same module. The default is True.
Then go down to the middle of the module, where an if self.use_pynisher ... else statement exists. Obviously our code now enters the else branch, but it is not set up correctly. Change result = self.ta(config, **obj_kwargs) to result = self.ta(list(config.get_dictionary().values())). This part may still need to be adjusted depending on what kind of inputs your function handles, but essentially you can see that this enables the basic example shown in the included branin_fmin.py module. If you are doing the random forest example, don't change it at all, etc.
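To see why the arguments need unpacking, here is a standalone mock of the edited call (this is not SMAC3 source; FakeConfig just stands in for SMAC's Configuration object, whose real get_dictionary() method returns the parameter values):

# Standalone illustration of the edited else branch -- not SMAC3 code.
class FakeConfig:
    """Stand-in for smac's Configuration object."""
    def get_dictionary(self):
        return {"x0": 2.5, "x1": 7.5}  # parameter names/values are made up

def target_algorithm(values):
    # A target function that expects a plain list of parameter values,
    # like the Branin example mentioned above.
    x0, x1 = values
    return (x0 - 1.0) ** 2 + (x1 - 2.0) ** 2  # toy objective

config = FakeConfig()
# The replacement line: unpack the configuration into a list before calling.
result = target_algorithm(list(config.get_dictionary().values()))
print(result)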

Jest snapshot is redundant

I am writing snapshot tests using Jest for a Node.js and React app and have installed the snapshot-tools extension in VS Code.
Some of my tests are displaying this warning in the editor:
[snapshot-tools] The snapshot is redunant
(Presumably it is supposed to say redundant)
What does this warning mean? I am wondering how I can fix it.
I was having the same problem, so I took a look at the snapshot-tools code. It marks a snapshot section as redundant if it doesn't see a corresponding test in the test file that has a matching name and that calls expect().toMatchSnapshot() or something similar.
The problem is (as the "Limitations" section of the plugin's marketplace page says) that it does a static analysis of the test file to find the tests that use snapshots. That static analysis cannot detect tests that have dynamically generated names, or tests that don't call expect().toMatchSnapshot() directly in the test body.
For example, I was getting false-positive "redundant" warnings because I had some tests that called expect().toMatchSnapshot() in their afterEach() function rather than directly in the test body.
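A sketch of the pattern that triggered the false positives for me (the test and helper names are made up):

// Jest test file: the snapshot assertion lives in afterEach(), not in the test body,
// so snapshot-tools' static analysis flags the stored snapshots as redundant.
describe('price formatting', () => {
  let result;

  afterEach(() => {
    expect(result).toMatchSnapshot();
  });

  it('formats a plain price', () => {
    result = formatPrice(19.99); // hypothetical function under test
  });
});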
This could indicate that the snapshot is no longer linked to a valid test - have you changed your describe/it strings without updating the snapshots? Try running the tests with -- -u appended (e.g. npm test -- -u). If that doesn't work, have a look at your snapshots file and compare the titles to your test descriptions.

How to run one feature file as initialization (i.e. before all other feature files) in cucumber-jvm?

I have a Cucumber feature file 'A' that serves to set up the environment (data clean-up and initialization). I want to have it executed before all other feature files run.
It's kind of like the @Before hook described in http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work here because my feature file 'A' contains hundreds of Cucumber steps and it is not as simple as:
@Before
public void beforeScenario() {
    tomcat.start();
    tomcat.deploy("munger");
    browser = new FirefoxDriver();
}
Instead, it would be better to be able to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am surprised that no one has had this type of requirement before.
The closest I found is 'Background'. But that would mean having only one huge feature file with the content of 'A' as the Background at the top and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run single-threaded, in this order:
Alphabetically by feature file directory
Alphabetically by feature file name within each directory
Scenario execution is then by order within the feature file.
So put your initialization feature in the first directory (alphabetically), with a file name that sorts first (alphabetically) within that directory - see the layout sketched below.
That being said, it is generally bad practice to require an execution order for your feature files. We run our feature files in parallel, so order is meaningless. For Jenkins or TeamCity you could instead add a build step that executes the one feature file, followed by a second build step that executes the rest of your feature files.
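For example, a layout like the following (directory and file names are purely illustrative) makes the initialization feature sort first and therefore run first:

features/
    00-setup/
        initialization.feature    (your feature 'A')
    10-application/
        login.feature
        orders.feature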
I also have a project where we have a single feature file that contains a very long scenario called Scenario: Test data, made up of a lot of very long steps like this:
Given the system knows about the following employees
    | uuid | user-key   | name | nickname |
    | 1    | 0101140000 | Anna | annie    |
    ... hundreds of lines like this follow ...
We see these long "system knows" scenarios as quite valuable, because they give our testers, Product Owner and developers a baseline of what data is in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and are a shared team metaphor.)
In the beginning, we were relying on the alphabetical naming convention to have the AAA.feature run first.
Later, we discovered that this setup was brittle, and decided to use the following trick, inspired by the PageObject pattern:
Add a Background with the single line Given(~'^I set test data for all feature files$')
In the step definition, have a factory create the test data, and make sure inside the factory method that it is only created once, e.g. testFactory.createTestData() (see the sketch below)
In this way you get both the convenience of expressing the reference setup as a scenario, which enhances team communication, and a stable test setup.
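A sketch of how that once-only step definition might look in Java (class and factory names are illustrative, and the annotation package differs between cucumber-jvm versions):

import cucumber.api.java.en.Given;  // io.cucumber.java.en.Given in newer cucumber-jvm versions

public class TestDataSteps {

    private static boolean testDataCreated = false;

    @Given("^I set test data for all feature files$")
    public void iSetTestDataForAllFeatureFiles() {
        // Create the reference data only once, no matter how many
        // feature files run this Background step.
        if (!testDataCreated) {
            TestDataFactory.createTestData();  // hypothetical factory from the answer
            testDataCreated = true;
        }
    }
}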
Hope this is helpful!
Agata

Access test resources within Haskell tests

This is probably a basic question but I've been Googling for a while on it... I have a Cabal-ized Haskell project and I'm in the process of writing integration tests for it. I want to be able to include test resources for my project in the same repo and access them in tests. For example, here are a couple things I want to accomplish:
1) Check a dummy database instance into my repo, including a shell script that spins up a database process. I want to write an Hspec integration test that spins up the database process, makes some calls to it, and then shuts it down. So I need to be able to find the shell script so I can use System.Process.createProcess on it.
2) Check in paired "input" and "output" files. My test should process each of the input files and compare them to a corresponding output file to make sure they match. (I've read about "golden" but it doesn't seem to solve the problem of finding/reading the input files in the first place?)
In short, how can I go about creating a "resources" folder in the root folder of my Haskell project and find the path to it inside tests?
Have a look at an existing project that uses input and output files.
For example, take haddock; the source code is at https://github.com/haskell/haddock. The test files live under a folder (https://github.com/haskell/haddock/tree/master/html-test/ref) and are referenced as extra-source-files in the cabal file (https://github.com/haskell/haddock/blob/master/haddock.cabal). The test code (https://github.com/haskell/haddock/blob/master/html-test/run.lhs) then uses a CPP macro (__FILE__) to get the path of the current source file, and resolves the test files relative to that directory.
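A minimal sketch of that approach (module, folder and file names are illustrative; it assumes the resources folder sits next to this test module and is listed under extra-source-files in the .cabal file):

{-# LANGUAGE CPP #-}
module Main where

import System.FilePath (takeDirectory, (</>))

-- __FILE__ is expanded by CPP to the path of this source file as given to GHC,
-- so resources can be resolved relative to the test module itself.
resourceDir :: FilePath
resourceDir = takeDirectory __FILE__ </> "resources"

main :: IO ()
main = do
    input <- readFile (resourceDir </> "example-input.txt")  -- illustrative resource file
    putStrLn input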

InstallShield: How can single custom actions be tested?

(I'm using InstallShield2012 V.18)
In Setup.rul I declared a function via a prototype declaration, included the file with the function definition, and compiled it successfully (InstallShield compile).
Now I'd like to test this function (only).
I don't want to run the whole installation, not even a test run (Ctrl-T), because I want to avoid a complete rebuild, which takes too long to do often.
Is there a way to test only the custom function in InstallShield or per command line?
Not really although I can give you some tips.
Create a dummy feature with a release flag of DEVONLY.
Create a dummy component for that feature.
Create a Product Configuration that builds a single MSI with no setup.exe and a release flag of DEVONLY.
Building this product configuration will be very fast - a couple of seconds on my laptop with an SSD. You can selectively include other features through the use of release flags if you need certain components to set up the test environment for your CA.
Another strategy is to develop your CA in a test harness project and then transplant the code into your real installer when you know it all works.
Christopher, thanks for the fast reply. I have to put my answer here because commenting was restricted (my comment would have been too long).
I also thought about using such a workaround but first wanted to avoid it if possible.
But OK, now I tried these steps: 1 and 2 were no problem, but at 3 InstallShield didn't allow me to configure a Product Configuration without a Setup.exe in my .ism file (although we have IS2012 Pro).
Then I tried to do it in a Basic MSI Project (is that what you meant?), which really does build in a very short time. And now I can see my scripting during the test release, yeah :-)
To "transplant" my script back to the main .ism I'm missing an export function for .rul files like the one that exists for custom actions; there is only an import. So I will have to copy and paste while switching between .ism files, but never mind.
