I have a framework called SampleFramework, which is used by SampleFrameworkTestApp. SampleFramework itself uses DependentFramework, so I have added DependentFramework to the Cartfile of SampleFramework, and SampleFramework is added as a dependency in the Cartfile of SampleFrameworkTestApp. If I run carthage update from SampleFrameworkTestApp it also downloads DependentFramework, which is expected. But is there a way to build SampleFramework without linking it to DependentFramework at all? I don't want to have two copies of DependentFramework, one in SampleFramework and one in SampleFrameworkTestApp, and I want to avoid pushing DependentFramework as part of SampleFramework to Git.
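For reference, the dependency declarations look roughly like this; the GitHub owner "me" and the version constraints are placeholders for the real values:

Cartfile of SampleFramework:
github "me/DependentFramework" ~> 1.0

Cartfile of SampleFrameworkTestApp:
github "me/SampleFramework" ~> 1.0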
I am following the steps on this page:
https://learn.microsoft.com/en-us/composer/how-to-create-custom-actions
and have reached the part where the two schemas, the bot's and my custom one, need to be merged. However, when running the PowerShell script found inside the created project template (the CoreAssistant template), I get the following error:
Error conflicting definitions of HelpDialog.dialog :
C:\Users\user\source\repos\AvanadeCoreAssistant\AvanadeCoreAssistant\dialogs\imported\Microsoft.Bot.Components.HelpAndCancel\HelpDialog\HelpDialog.dialog
Microsoft.Bot.Components.HelpAndCancel:
C:\Users\user\.nuget\packages\microsoft.bot.components.helpandcancel\1.1.2\exported\HelpDialog\HelpDialog.dialog
Error conflicting definitions of HelpDialog.en-us.lu.dialog :
C:\Users\user\source\repos\AvanadeCoreAssistant\AvanadeCoreAssistant\dialogs\imported\Microsoft.Bot.Components.HelpAndCancel\HelpDialog\recognizers\HelpDialog.en-us.lu.dialog
Microsoft.Bot.Components.HelpAndCancel:
C:\Users\user\.nuget\packages\microsoft.bot.components.helpandcancel\1.1.2\exported\HelpDialog\recognizers\HelpDialog.en-us.lu.dialog
Error conflicting definitions of HelpDialog.lu.dialog
HelpDialog is a predefined dialog that was already present. I had installed Node.js and the @microsoft/botframework-cli package because the PowerShell script requires them to run bf dialog:merge, and now it seems these two sources have some kind of conflict.
To add to the other answers, the changes that worked for me are as follows:
In update-schema.ps1, line 11, change "!**/generated" to "!../generated" and add "!../dialogs/imported".
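For orientation, the line in question is the one that passes the glob patterns to bf dialog:merge. After the change, that call would look roughly like the following; the other patterns and the output variable are from memory and may differ in your copy of the script, so treat this purely as a sketch:

bf dialog:merge "*.schema" "*.csproj" "!../generated" "!../dialogs/imported" -o "$outputFile"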
Also, make sure that your custom action project is INSIDE your bot directory; it should be a folder next to the "schemas" folder for the script to find it.
Note/Edit: Nesting the project inside the bot gets the script working, but I do not recommend it, because it causes other errors. Oddly, I found it was best to move the whole custom solution up a level, next to the bot project. You may have to edit the [botName].sln file in Notepad to reference the new location of the project, and also update the bot project's project reference.
I fixed it by changing the script. I noticed the script was trying to ignore the imported and generated folders, but the error message indicated it was not actually doing so, so I changed the pattern from !**/generated to !../generated.
I experienced the same issue.
To fix this problem you could simply delete the corresponding dialogs in the "imported" folder. Note, however, that this also deletes those dialogs from your bot, which is not ideal but should be of little concern for a sample application.
Sample situation
I have my own Yeoman generator, which has a folder containing a "template" of the resulting project.
The generator takes some information from the user, interpolates it into the "template", and then outputs a simple working project.
I want to ensure the "template" actually works, at least in one positive scenario if not with every combination of inputs. I can write integration tests (which run the generator with some data, then run the resulting code and verify that everything works as expected), but that's sometimes too much work, and it's inconvenient for trial-and-error development or prototyping.
Question
Is there an easy way to work with the "template" itself, to run or use it locally and manually, without having to rerun the generator every time I change a single letter in one of its files?
Maybe some sort of build step that runs the generator for me with preset data? Is there anything ready-made in the form of an npm module? Is there an established best practice?
After running the integration test, you can spawn some commands in the generated project folder and check whether they pass.
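For example, once the integration test has generated the project into some directory (generatedDir below stands for whatever your test harness provides), a helper like this can run the project's own commands and fail the test on a non-zero exit code; it is only a sketch:

const { execSync } = require('child_process');

// generatedDir is assumed to be the folder the integration test generated into
function verifyGeneratedProject(generatedDir) {
  for (const cmd of ['npm install', 'npm test']) {
    // execSync throws (and thereby fails the test) if the command exits non-zero
    execSync(cmd, { cwd: generatedDir, stdio: 'inherit' });
  }
}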
So far, the best solution I found is to create a script (sketched after this list) which:
Creates a temporary sandbox directory.
Performs npm link
Alters the PATH so it does not contain the .bin folder of your local node_modules (this is needed to prevent the locally installed Yeoman from taking precedence over the global one when the script is run, e.g. via npm run develop).
Sets an environment variable NON_INTERACTIVE to something truthy.
Runs yo <your generator> in the sandbox directory.
Runs npm start in the sandbox directory to run the freshly generated server code.
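A sketch of such a script, assuming the generator is linked as generator-myapp (so it is invoked as yo myapp) and the generated project has an npm start script; both names are placeholders:

#!/usr/bin/env bash
set -e
# 1. temporary sandbox directory
sandbox=$(mktemp -d)
# 2. make the local generator checkout available globally
npm link
# 3. drop the local node_modules/.bin from PATH so the global yo wins
export PATH=$(echo "$PATH" | tr ':' '\n' | grep -v "node_modules/.bin" | paste -sd ':' -)
# 4. tell the generator to skip its prompts
export NON_INTERACTIVE=1
# 5. generate into the sandbox and start the freshly generated server
cd "$sandbox"
yo myapp
npm start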
Change your generator so that, when process.env.NON_INTERACTIVE is truthy, it automatically supplies dummy default values for required prompts that have no defaults, for example:
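A sketch of what that can look like inside the generator; the prompt names and the dummy values are made up for illustration:

// in the generator class
async prompting() {
  if (process.env.NON_INTERACTIVE) {
    // skip the interactive prompts entirely and fall back to dummy defaults
    this.answers = { appName: 'sandbox-app', port: '3000' };
    return;
  }
  this.answers = await this.prompt([
    { type: 'input', name: 'appName', message: 'Application name?' },
    { type: 'input', name: 'port', message: 'Port to listen on?' }
  ]);
}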
Then run the script as:
$ nodemon --watch <directory with your template> --exec <path to your script> --ext js
It's slow, but it works. This way you can develop the template itself and avoid filling in the generator's prompts every time you want to try something out.
This is probably a basic question, but I've been Googling it for a while... I have a Cabal-ized Haskell project and I'm in the process of writing integration tests for it. I want to be able to include test resources for the project in the same repo and access them in tests. For example, here are a couple of things I want to accomplish:
1) Check a dummy database instance into my repo, including a shell script that spins up a database process. I want to write an Hspec integration test that spins up the database process, makes some calls to it, and then shuts it down. So I need to be able to find the shell script in order to call System.Process.createProcess on it.
2) Check in paired "input" and "output" files. My test should process each input file and compare the result to the corresponding output file to make sure they match. (I've read about "golden" tests, but they don't seem to solve the problem of finding/reading the input files in the first place?)
In short, how can I go about creating a "resources" folder in the root folder of my Haskell project and find the path to it inside tests?
Have a look at an existing project that uses input and output files.
For example, take haddock; the source code is at https://github.com/haskell/haddock. The test files live under a folder (https://github.com/haskell/haddock/tree/master/html-test/ref) and are referenced as extra-source-files in the cabal file (https://github.com/haskell/haddock/blob/master/haddock.cabal). The test code (https://github.com/haskell/haddock/blob/master/html-test/run.lhs) then uses the CPP macro __FILE__ to get the path of the current source file and resolves the test files relative to that folder.
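A minimal sketch of that approach for the original question, assuming a resources folder next to the test source file and that the test is compiled from the package root (which is what cabal does by default); start-db.sh and the case files are made-up names, and the resources folder would be listed under extra-source-files:

{-# LANGUAGE CPP #-}
module Main where

import System.FilePath (takeDirectory, (</>))
import System.Process (callProcess)

-- CPP substitutes __FILE__ with the path of this source file as it was
-- handed to GHC, so resources can be located relative to it.
resourceDir :: FilePath
resourceDir = takeDirectory __FILE__ </> "resources"

-- stand-in for the real transformation under test
process :: String -> String
process = id

main :: IO ()
main = do
  -- part 1 of the question: start the dummy database via its shell script
  callProcess (resourceDir </> "start-db.sh") []
  -- part 2: compare a processed input file against its expected output
  input    <- readFile (resourceDir </> "cases" </> "case1.input")
  expected <- readFile (resourceDir </> "cases" </> "case1.output")
  putStrLn (if process input == expected then "ok" else "mismatch")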
(I'm using InstallShield 2012, v18.)
In Setup.rul I declared a function via a prototype declaration, included the file containing the function definition, and compiled everything successfully (InstallShield compile).
Now I'd like to test only this function.
I don't want to run the whole installation, not even in test mode (Ctrl+T), because I want to avoid a complete rebuild, which takes too long to do often.
Is there a way to test only the custom function, either in InstallShield or from the command line?
Not really, although I can give you some tips.
Create a dummy feature with a release flag of DEVONLY.
Create a dummy component for that feature.
Create a Product Configuration that builds a single MSI, no EXE, with a release flag of DEVONLY.
Building this product configuration will be very fast: a couple of seconds on my laptop with an SSD. You can selectively include other features through the use of release flags if you need certain components in order to set up the test environment for your CA.
Another strategy is to develop your CA in a test harness project and then transplant the code into your real installer when you know it all works.
Christopher, thanks for the fast reply. I have to put my response here as an answer because it was too long for a comment.
I had also thought about such a workaround, but wanted to avoid it if possible.
But OK, now I tried these steps: 1 and 2 were no problem, but at 3 InstallShield didn't let me configure a Product Configuration without a Setup.exe in my .ism file (although we have IS2012 Pro).
Then I tried to do it in a Basic MSI project (is that what you meant?), which really does build in a very short time. And now I can see my scripting during a test release, yeah :-)
To "transplant" my script to the main .ism I'm missing an export function for .rul files like the one that exists for custom actions; there is only an import. So I will have to copy and paste while switching between .ism files, but never mind.
Basically I have a project A in my CruiseControl configuration that has two different triggers. One is an intervalTrigger, which checks whether modifications exist in the repository and then builds project A. The other is a projectTrigger, which builds project A whenever project B is built. Now I have an executable file and I only want it to run on the intervalTrigger, not on the projectTrigger. Is that possible? How?
I'm not sure I understand what you're asking, but you can always create a new project C and set just the interval trigger to execute your file.
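A sketch of what that could look like in ccnet.config; the project name, interval, and executable path are placeholders, and project C would also need a sourcecontrol block like project A's (omitted here) for IfModificationExists to detect changes:

<project name="ProjectC">
  <triggers>
    <!-- only an interval trigger, no projectTrigger, so building B never starts this project -->
    <intervalTrigger seconds="60" buildCondition="IfModificationExists" />
  </triggers>
  <tasks>
    <exec>
      <executable>C:\tools\MyTool.exe</executable>
    </exec>
  </tasks>
</project>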