I'm trying to write a Python program that automates our test procedure: I give the AI the project folder, it collects all the project files (I have done this part), and then it analyzes them to produce test procedure steps with expected results. I used the Davinci model (text-davinci-003) for this, but the problem is that if the project has a lot of files, the model can't analyze them all (the limit is 4,000 tokens). Any solutions?
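One common workaround is to split the project into chunks that fit within the context window, summarize each chunk separately, and then ask for the test procedure from the combined summaries. Below is a minimal sketch of that idea; ask_model is a hypothetical placeholder for whatever completion call you already make, and the character-based chunk size is only a rough stand-in for a real token count.

import os

MAX_CHARS = 8000  # rough stand-in for the ~4,000-token limit; adjust for your tokenizer

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for your existing text-davinci-003 completion call."""
    raise NotImplementedError

def read_project_files(folder: str) -> list[tuple[str, str]]:
    """Collect (path, content) pairs for every readable text file in the project."""
    files = []
    for root, _, names in os.walk(folder):
        for name in names:
            path = os.path.join(root, name)
            try:
                with open(path, encoding="utf-8") as f:
                    files.append((path, f.read()))
            except (UnicodeDecodeError, OSError):
                pass  # skip binary or unreadable files
    return files

def chunk(text: str, size: int = MAX_CHARS) -> list[str]:
    """Split a file into pieces small enough for one prompt."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize_project(folder: str) -> str:
    """Summarize each chunk of each file, one request at a time."""
    summaries = []
    for path, content in read_project_files(folder):
        for part in chunk(content):
            summaries.append(ask_model(f"Summarize what this part of {path} does:\n{part}"))
    return "\n".join(summaries)

def generate_test_procedure(folder: str) -> str:
    """Ask for the final test procedure based on the much shorter summaries."""
    return ask_model("Based on these file summaries, write test procedure steps "
                     "with expected results:\n" + summarize_project(folder))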
I have been working on a way to export models from Simulink to an FMU, which we will open source once we have a not-so-buggy version. A colleague and I finally got a working version and extracted our first FMU from just a zip.
As it turns out, we must be doing something wrong in the program. Our FMU works fine except for inputs: none of the inputs seem to be working. I have tested this multiple times, for example by having a constant go to an output, which works, and I have also tested working FMUs made with our other, non-open-source software, and those work. I just can't find what is different between their FMUs and ours.
Here is a Dropbox link if anyone wants the source of the test FMU. The model is simple, with one input going straight to an output and one output fed from a constant. Currently, I can read the output fed by the constant, but not the one fed by the input; it's always 0. The Dropbox folder includes the generated zip file from the model, the model.slx file, the generated FMU, and also a folder containing everything inside the FMU. I know we aren't including all sources inside the FMU just yet, but I will fix that once we find out what our issue with the FMUs is. The sources exist inside the zip, so nothing is left out.
If anyone with experience around FMI has had this issue before, or has a clue what we could be doing wrong, I would be very grateful if you could share your experience.
I fixed my issue by changing the FMUSDK fmuTemplate.c file to call functions and handle my own inputs and outputs instead.
I have a simple Node JS application and am using Istanbul with Mocha to generate code coverage reports. This is working fine.
If I write a new function but do not create any tests for it (or even create a test file), is it possible to check for this?
My ultimate goal is for any code which has no tests at all to be picked up by our continuous integration process and for it to fail that build.
Is this possible?
One way you could achieve this is by using code coverage.
"check-coverage": "istanbul check-coverage --root coverage --lines 98 --functions 98 --statements 98 --branches 98"
Just add this to your package.json file and change the thresholds if needed. If code is written but not tested, coverage drops below the thresholds and the check fails.
I'm not sure if this is the correct way to solve the problem, but running the cover command first with the --include-all-sources parameter makes Istanbul report on any code without a test file and add it to the coverage.json file it generates.
Then check-coverage fails, which is what I'm after. In my CI process I run cover first, then check-coverage.
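For reference, here is a hedged sketch of how the two scripts might sit together in package.json; the exact Mocha invocation and thresholds will differ per project:

{
  "scripts": {
    "cover": "istanbul cover --include-all-sources node_modules/.bin/_mocha -- --recursive",
    "check-coverage": "istanbul check-coverage --root coverage --lines 98 --functions 98 --statements 98 --branches 98",
    "ci": "npm run cover && npm run check-coverage"
  }
}

With a setup like this, the CI server only needs to run npm run ci; the build fails whenever untested source files drag coverage below the thresholds.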
Personally, I find the Istanbul documentation a little confusing/unclear, which is why I didn't see this at first!
Sample situation
I have my own Yeoman generator, which has a folder containing a "template" of the resulting project.
The generator takes some information from user, interpolates the "template" with the information and then outputs a simple working project.
I want to ensure the "template" actually works, at least in one positive scenario if not with every combination of inputs. I can write integration tests (which run the generator with some data, then try to run the resulting code and verify that everything works as expected), but that's often too much work, and it's inconvenient for trial-and-error development or prototyping.
Question
Is there an easy way to work with the "template" itself, to run or use it locally and manually, without having to re-run the generator every time I change a single letter in the "template" files?
Maybe some sort of build step that runs the generator for me with some preset data? Is there anything ready-made in the form of an npm module? Is there a best practice?
After running the integration test, you can spawn some commands in the generated project folder and check that they pass.
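For example, assuming the integration test has already written the generated project to a known directory, a small helper using Node's child_process could run the generated project's own scripts and fail the suite if they fail (generatedDir is whatever path your test used):

const { execSync } = require('child_process');

// generatedDir: wherever your integration test told the generator to write to
function smokeTestGeneratedProject(generatedDir) {
  const opts = { cwd: generatedDir, stdio: 'inherit' };
  execSync('npm install', opts); // install the template's own dependencies
  execSync('npm test', opts);    // throws (and fails the suite) if the generated project's tests fail
}

module.exports = smokeTestGeneratedProject;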
So far, the best solution I have found is to create a script (sketched below) that:
Creates a temporary sandbox directory.
Performs npm link.
Alters the PATH so it does not contain the .bin of your local node_modules (this is needed to prevent a locally installed Yeoman from taking precedence over the global one when the script is run, e.g. via npm run develop).
Sets an environment variable NON_INTERACTIVE to something truthy.
Runs yo <your generator> in the sandbox directory.
Runs npm start in the sandbox directory to run the freshly generated server code.
Change your generator so that, when process.env.NON_INTERACTIVE is truthy, it automatically provides dummy default values for required prompts that have no defaults.
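A minimal sketch of such a script, assuming it is started from the generator's repository root and that my-generator is a placeholder for your generator's name:

#!/usr/bin/env bash
set -e

GENERATOR_DIR="$(pwd)"              # run this from your generator's repo root
SANDBOX="$(mktemp -d)"              # 1. temporary sandbox directory

(cd "$GENERATOR_DIR" && npm link)   # 2. make the generator available globally

# 3. strip the local node_modules/.bin from PATH so the global yo wins
PATH="$(echo "$PATH" | tr ':' '\n' | grep -v "^$GENERATOR_DIR/node_modules/.bin$" | paste -sd: -)"
export PATH

export NON_INTERACTIVE=1            # 4. tell the generator to use its dummy answers

cd "$SANDBOX"
yo my-generator                     # 5. placeholder generator name
npm start                           # 6. run the freshly generated server code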
Then run the script as:
$ nodemon --watch <directory with your template> --exec <path to your script> --ext js
It's slow, but it works. This way you can develop the template itself and avoid filling in the generator's prompts every time you want to try something out.
This is probably a basic question, but I've been Googling it for a while... I have a Cabal-ized Haskell project and I'm in the process of writing integration tests for it. I want to be able to include test resources for my project in the same repo and access them in tests. For example, here are a couple of things I want to accomplish:
1) Check a dummy database instance into my repo, including a shell script that spins up a database process. I want to write an Hspec integration test that spins up the database process, makes some calls to it, and then shuts it down. So I need to be able to find the shell script so I can use System.Process.createProcess on it.
2) Check in paired "input" and "output" files. My test should process each of the input files and compare the result to the corresponding output file to make sure they match. (I've read about "golden" tests, but they don't seem to solve the problem of finding/reading the input files in the first place?)
In short, how can I go about creating a "resources" folder in the root folder of my Haskell project and find the path to it inside tests?
Have a look at an existing project that uses input and output files.
For example, take haddock; the source code is at https://github.com/haskell/haddock. The test files live under a folder (https://github.com/haskell/haddock/tree/master/html-test/ref) and are referenced as extra-source-files in the cabal file (https://github.com/haskell/haddock/blob/master/haddock.cabal). The test code (https://github.com/haskell/haddock/blob/master/html-test/run.lhs) then uses the CPP macro __FILE__ to get the path of the current source file and resolves the test files relative to that directory.
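Applied to the two use cases above, a minimal sketch could look like the following; it assumes a resources folder next to the test source file (listed under extra-source-files in the .cabal file, as in the haddock example) containing assumed start-db.sh and stop-db.sh scripts plus paired example1.input/example1.output files, and process stands in for whatever function is actually under test:

{-# LANGUAGE CPP #-}
module Main (main) where

import System.FilePath (takeDirectory, (</>), replaceExtension)
import System.Process (callProcess)
import Test.Hspec

-- Directory of this source file, resolved at compile time via the CPP __FILE__ macro.
testDir :: FilePath
testDir = takeDirectory __FILE__

resourceDir :: FilePath
resourceDir = testDir </> "resources"

main :: IO ()
main = hspec $ do
  describe "database integration" $
    it "can talk to the dummy database" $ do
      callProcess (resourceDir </> "start-db.sh") []   -- assumed start-up script
      -- ... make some calls to the database here ...
      callProcess (resourceDir </> "stop-db.sh") []    -- assumed shutdown script

  describe "golden files" $
    it "matches the expected output for example1" $ do
      let inFile  = resourceDir </> "example1.input"
          outFile = replaceExtension inFile "output"
      input    <- readFile inFile
      expected <- readFile outFile
      process input `shouldBe` expected

-- Placeholder for the real function under test.
process :: String -> String
process = id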
(I'm using InstallShield2012 V.18)
In Setup.rul I defined a function via a prototype declaration, included the file with the function definition, and compiled it successfully (InstallShield compile).
Now I'd like to test this function (only).
I don't want to run the whole installation, not even a test run (Ctrl-T), because I want to avoid a complete rebuild, which takes too long to do often.
Is there a way to test only the custom function in InstallShield or per command line?
Not really, although I can give you some tips.
Create a dummy feature with a release flag of DEVONLY.
Create a dummy component for that feature.
Create a ProductConfiguration that builds a single MSI with no EXE and a release flag of DEVONLY.
Building this product configuration will be very fast, a couple of seconds on my laptop with an SSD. You can selectively include other features through the use of release flags if you need certain components in order to set up the test environment for your CA.
Another strategy is to develop your CA in a test harness project and then transplant the code into your real installer when you know it all works.
Christopher, thanks for the fast reply. I have to put my response here as an answer because it was too long for a comment.
I also thought about using such a workaround but first wanted to avoid it if possible.
But OK, now I tried these steps: 1 and 2 were no problem, but at step 3 InstallShield didn't allow me to configure a Product Configuration without a Setup.exe in my .ism file (although we have IS2012 Pro).
Then I tried it in a Basic MSI project (is that what you meant?), which really does build in a very short time. And now I can see my scripting during a test release, yeah :-)
To "transplant" my script now to the main ism I'm missing an export function for .rul files as it exists for custom actions, but there is only a import. So I will have to copy-paste while switching between ism files, but never mind.