I understand that I have to create an assets.coffee file that configures how files are compiled and combined, and then run cake assets:compile to actually do it.
However, I tried that and I get the message: No such task: assets:compile
A related question: will Tower also handle inserting the files into the actual HTML (for example, a layout, header, or footer view)? Since the compiled resource names are random every time, I can't imagine this being some kind of manual work?
The command for compiling assets is cake assets:bundle (see https://github.com/viatropos/tower/blob/master/test/example/Cakefile#L15), which runs this method: https://github.com/viatropos/tower/blob/master/packages/tower-application/server/assets.coffee#L22
I am currently trying to parse some invoice data and came across this package on PyPI. It seems to be very handy for this task. There is one problem: I can't run it due to a 'no template found for ' error. From the documentation [https://pypi.org/project/invoice2data/0.2.31/#description]
it becomes clear that you need to specify a template and then run the following command: invoice2data <invoice_file>.pdf. To specify a template, you need to run the command invoice2data --template-folder <yourfolder>.
When executing these commands in my WSL (Linux virtual machine, needed to run the package), it keeps complaining. My template files are in the folder 'tpl' (2 custom YML files), and the invoice file is called invoice.pdf; see the screenshots. I have attached the invoices too for clarity; please note these are all test files. My aim is to first make sure invoice2data works, and then write my own custom YML template following the tutorial.
Somehow invoice2data does not get that it needs to use a template (I really don't care which template at this stage) to execute the parsing. I have looked everywhere on Google; there are topics on this, but none offer me a solution. I hope somebody can help me out. Much appreciated, thanks a lot in advance.
Jeffrey
1. Files and directories
2. YML template files
3. Command-line execution gives an error
4a. Invoice sample 1
4b. Invoice sample 2
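For reference, the same extraction can be driven from the library's Python API; here is a minimal sketch, assuming invoice2data 0.2.x and the tpl folder and invoice.pdf mentioned above:

from invoice2data import extract_data
from invoice2data.extract.loader import read_templates

# Load the custom YML templates from the 'tpl' folder
# (this is what --template-folder does on the command line).
templates = read_templates('tpl')

# Parse the invoice using those templates; if a template matches,
# a dict with the extracted fields is returned.
result = extract_data('invoice.pdf', templates=templates)
print(result)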
I have a Cucumber feature file 'A' that sets up the environment (data cleanup and initialization). I want it to be executed before all other feature files run.
It's kind of like the @Before hook described at http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work here, because my feature file 'A' contains hundreds of Cucumber steps and it is not as simple as:
@Before
public void beforeScenario() {
    tomcat.start();
    tomcat.deploy("munger");
    browser = new FirefoxDriver();
}
Instead, it's better to be able to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am surprised that no one has had this type of requirement before.
The closest thing I found is 'Background'. But that would mean having one huge feature file with the content of 'A' as the Background at the top and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run single-threaded, in this order:
Alphabetically by feature file directory
Alphabetically by feature file name within a directory
Scenarios are then executed in the order they appear within the feature file.
So put your initialization feature in the first directory (alphabetically), with a file name that sorts first (alphabetically) in that directory.
That being said, it is generally bad practice to require an execution order for your feature files. We run our feature files in parallel, so order is meaningless. For Jenkins or TeamCity you could add a build step that executes the setup feature file, followed by a second build step that executes the rest of your feature files.
I also have a project where we have a single feature file that contains a very long scenario called Scenario: Test data, with a lot of very long steps like this:
Given the system knows about the following employees
  | uuid | user-key   | name | nickname |
  | 1    | 0101140000 | Anna | annie    |
... hundreds of lines like this follow ...
We see these long SystemKnows scenarios as quite valuable, so that our testers, Product Owner and developers have a baseline of what data are in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and are a shared team metaphor.)
In the beginning, we relied on the alphabetical naming convention to have AAA.feature run first.
Later, we discovered that this setup was brittle, and decided to use the following trick, inspired by the PageObject pattern:
1. Add a Background with the single step "Given I set test data for all feature files" (bound to a step definition like Given(~'^I set test data for all feature files$')).
2. In that step definition, use a factory to create the test data, and make sure inside the factory method that the data is only created once, e.g. testFactory.createTestData().
In this way, you get both the convenience of expressing the reference setup as a scenario, which enhances team communication, and a stable test setup.
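For illustration, here is a minimal sketch of that create-once guard (shown in Python for brevity; the class and function names are hypothetical, and the same pattern carries over to a Java step definition):

def load_reference_employees():
    # Stand-in for the real setup (employees table, reference data, ...).
    print("reference data created")

class TestDataFactory:
    _created = False  # shared flag, lives for the whole test run

    @classmethod
    def create_test_data(cls):
        # Called from every scenario's Background, but only the
        # first call actually sets up the reference data.
        if cls._created:
            return
        cls._created = True
        load_reference_employees()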
Hope this is helpful!
Agata
I'm trying to set up a build system involving a code generator. The exact files generated are unknown until after the generator is run, but I'd like to be able to run further build steps by pattern matching (run some program on all files with some extension). Is this possible?
Some of the answers here involving code generation seem to assume that the output is known or a listing of generated files is created. This isn't impossible in my case, but I'd like to avoid it since it makes things more complicated.
https://bitbucket.org/scons/scons/wiki/DynamicSourceGenerator seems to indicate that it's possible to add additional targets during Builder actions, but while I could get the build to run and list the generated files, any build steps introduced don't run.
https://bitbucket.org/scons/scons/wiki/NonDeterministicDependencies uses Scanners to add build steps. I put a glob(...) in a scanner, and it succeeds in detecting the generated files, but the files are inexplicably deleted before it actually runs the dependent step.
Is this use case possible? And why is SCons deleting my generated files?
A toy example
source (the file referenced in SConscript)
An example generator: it creates 3 files (not easily known to the build system) and puts them in the folder passed as its argument:
echo "echo 1" > $1/gen1.txt
echo "echo 2" > $1/gen2.txt
echo "echo 3" > $1/gen3.txt
SConstruct
Just sets up a variant_dir
SConscript('SConscript', variant_dir='build')
SConscript
The goal is for it to:
"Compile" the generator (in this toy example, just copies a file called 'source' and adds execute permissions
Run the "compiled" generator ('source' is a script that generates files)
Perform some operation on each of those generated files by extension. This example just runs the "compile" copy operation on them (for simplicity).
env = Environment()
env.Append(BUILDERS={'ExampleCompiler':
    Builder(action=[Copy('$TARGET', '$SOURCE'),
                    Chmod('$TARGET', 0755)])})

generator = env.ExampleCompiler('generator', 'source')

env.Append(BUILDERS={'GeneratorRun':
    Builder(action=[Mkdir('$TARGET'),
                    '$SOURCE $TARGET'])})

generated_dir = env.GeneratorRun(Dir('generated'), generator)
Everything's fine up to here, where all the targets are explicitly known to the build system ahead of time.
Attempting to use this block of code to glob over the generated files causes SCons to delete (!!) the generated files:
for generated in generated_dir[0].glob('*.txt'):
    generated_run = env.ExampleCompiler(generated.abspath + '.sh', generated)
Attempting to use an action to update the build tree results in additional actions not being run:
def generated_scanner(target, source, env):
    for generated in source[0].glob('*.txt'):
        print "scanned " + generated.abspath
        generated_target = env.ExampleCompiler(generated.abspath + '.sh', generated)
        Alias('TopLevelAlias', generated_target)

env.Append(BUILDERS={'GeneratedOperation':
    Builder(action=[generated_scanner])})

dummy = env.GeneratedOperation(generated_dir[0].File('#dummy'), generated_dir)
Alias('TopLevelAlias', dummy)
The Alias operations are suggested in the dynamic source generator guide above, but don't seem to do anything. The prints do execute, which indicates that the action is run.
Running some build pattern on special file extensions is possible with SCons. For C/C++ files this is the preferred scheme, for example:
env = Environment()
env.Program('main', Glob('*.cpp'))
The main task of SCons, as a build system, is to do the minimum amount of work such that all your targets are up-to-date. This makes things complicated for the use case you've described above, because it's not clear how you can reach a "stable" situation where no generated files are added and all targets are built.
You're probably better off using a simple Python script directly... I really don't see how using SCons (or any other build system, for that matter) is mission-critical in this case.
Edit:
At some point you have to tell SCons about the created files (*.txt in your example above), and for tracking all dependencies properly, the list of *.txt files has to be complete. This is the task of the Emitter within SCons, which is responsible for returning the list of resulting target and source files for a Builder call. Note that these files don't have to exist physically during the "parse" phase of SCons. Please also have a look at my answer to Scons: create late targets, which goes into some more detail.
Once you have a proper Emitter in place (see also https://bitbucket.org/scons/scons/wiki/ToolsForFools, "Using Emitters"), you should be able to use the Glob('*.txt') call, which will detect and track your created files automatically.
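For the toy example above, a sketch of such an Emitter might look like this (note the assumption: the emitter must be able to predict the generator's output names, here simply hard-coded to gen1..gen3):

def generator_emitter(target, source, env):
    # The builder is invoked with the output directory as its target;
    # replace that single Dir node with the files the generator will
    # create, so SCons tracks them as real targets.
    out_dir = target[0]
    new_targets = [out_dir.File('gen%d.txt' % i) for i in (1, 2, 3)]
    return new_targets, source

env.Append(BUILDERS={'GeneratorRun':
    Builder(action=[Mkdir('${TARGET.dir}'),
                    '$SOURCE ${TARGET.dir}'],
            emitter=generator_emitter)})

generated = env.GeneratorRun(Dir('generated'), generator)

# The generated files are now known targets, so dependent steps work:
for txt in generated:
    env.ExampleCompiler(txt.abspath + '.sh', txt)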
Finally, on our page "Talks and Slides" (https://bitbucket.org/scons/scons/wiki/TalksAndSlides) you can find my talk from PyCon FR 2014, "Why SCons is Not Slow", which briefly explains how SCons works internally. This might be helpful in understanding this problem better and coming up with a full solution.
This is probably a basic question, but I've been Googling it for a while... I have a Cabal-ized Haskell project and I'm in the process of writing integration tests for it. I want to be able to include test resources for my project in the same repo and access them in tests. For example, here are a couple of things I want to accomplish:
1) Check a dummy database instance into my repo, including a shell script that spins up a database process. I want to write an Hspec integration test that spins up the database process, makes some calls to it, and then shuts it down. So I need to be able to find the shell script so I can use System.Process.createProcess on it.
2) Check in paired "input" and "output" files. My test should process each input file and compare the result to the corresponding output file to make sure they match. (I've read about "golden" testing, but it doesn't seem to solve the problem of finding/reading the input files in the first place?)
In short, how can I go about creating a "resources" folder in the root folder of my Haskell project and find the path to it inside tests?
Have a look at an existing project that uses input and output files.
For example, take haddock; the source code is at https://github.com/haskell/haddock. The test files live in a folder (https://github.com/haskell/haddock/tree/master/html-test/ref) and are referenced as extra-source-files in the cabal file (https://github.com/haskell/haddock/blob/master/haddock.cabal). The test code (https://github.com/haskell/haddock/blob/master/html-test/run.lhs) then uses a CPP macro (__FILE__) to get the current file's location, and can resolve the test files relative to that folder.
I've set up a GroovyResourceLoader and it seems to receive requests for Groovy scripts as necessary. I was just wondering: is it used anywhere besides class loading? Is there any benefit in simply wrapping a ClassLoader and loading *.groovy files there, as opposed to using a GRL? Are they just different ways to the same end?
GroovyResourceLoader (GRL) is used by GroovyClassLoader (GCL) and at least since Groovy 1.8 indirectly through GroovyScriptEngine (GSE). But GSE loads it through a GCL as well.
But what GRL does is "locate" the scripts and return a URL to the location. What GCL does is use the URL returned by a GRL to get the source and compile it to create the class, which is then available for loading.
GRL is a backend for GCL, so they are not different ways to the same end. True, you still have to do more to actually execute the script code (unless it is precompiled), but "get the script source, compile it, make a Class out of it and finally execute it" are the steps you always have to go through. In our GRL/GCL discussion, GRL does part of the first step and GCL itself does the third step. Step 2 is done by CompilationUnit inside GCL, and the last step is yours. There are other ways to complete these steps, of course, but that's out of scope for this discussion.