I like the idea of side-by-side development in JHipster to support easy upgrading, etc., as demonstrated in https://www.youtube.com/watch?v=Gg5CYoBdpVo
So instead of manually creating the BandExtendedResource and the BandExtendedRepository, etc., is there already, or could there be, either:
a JDL option to indicate side-by-side is going to be used
or a Yeoman generator for these files?
I understand this could be a bit circular, in that the point of the extended classes is that we don't overwrite them, but these could/would be excluded (one way or another) from subsequent generator runs.
Related
With my current workflow I have to check in git and manually review changes, like added functions, after every import-jdl that touches an entity I changed.
Is there a way to add functions to the classes JHipster creates without actually changing the files? Like code generation with annotations, or extending JHipster-created classes? I feel like I am missing some important documentation from JHipster; I would be grateful for pointers in the right direction.
Thanks!
I faced this problem in one of my projects and I'm afraid there is no easy way to tell JHipster not to overwrite your changes.
The good news is you have two ways of mitigating this and both will make your life much easier.
Update your entities in a separate branch
The idea is to update your entities (execute the import-jdl command) in a different branch and then, once the whole process is finished, merge the changes back to master.
This requires no extra changes to your code. The problem I had with this approach is that sometimes the merges were not trivial and I still had to go through a lot of code just to be sure that everything was still in place and working properly.
Do not change the generated code
This is known as the side-by-side practice. The general idea is that you never change the generated code directly; instead, you put your custom code in new files and extend the original ones whenever possible.
This way you can update your entities and JHipster will never remove or modify your custom code.
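For illustration, here is a minimal sketch of the idea with the Band entity from the question. The class names follow the question's naming, but the search method, the "/api/extended" path and the wiring are my own assumptions, not something the generator produces; package declarations and imports are omitted for brevity.

    // BandExtendedRepository.java -- lives in its own file, so re-running the
    // generator rewrites BandRepository but never touches this one. The derived
    // query below assumes the generated Band entity has a "name" field.
    public interface BandExtendedRepository extends BandRepository {
        List<Band> findAllByNameContainingIgnoreCase(String name);
    }

    // BandExtendedResource.java -- custom endpoints live under a separate path
    // ("/api/extended") so they cannot collide with the generated BandResource
    // under "/api".
    @RestController
    @RequestMapping("/api/extended")
    public class BandExtendedResource {

        private final BandExtendedRepository bandExtendedRepository;

        public BandExtendedResource(BandExtendedRepository bandExtendedRepository) {
            this.bandExtendedRepository = bandExtendedRepository;
        }

        @GetMapping("/bands")
        public List<Band> searchBands(@RequestParam String name) {
            return bandExtendedRepository.findAllByNameContainingIgnoreCase(name);
        }
    }

A subsequent import-jdl then only regenerates BandResource and BandRepository; these two extra files stay untouched.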
There are two videos available that will teach you (with examples) how to manage this:
Custom and Generated Code Side by Side by Antonio Goncalves
JHipster side-by-side in practice by David Steiman
In my opinion this is the best approach.
I know this probably isn't the answer you were looking for, but to my knowledge there's no better way.
I am automating acceptance tests defined in a Gherkin specification using Elixir. One way to do this is an ExUnit add-on called Cabbage.
Now ExUnit seems to provide a setup hook, which runs before each individual test, and a setup_all hook, which runs once before all tests in a module.
When I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But a Gherkin scenario almost always needs multiple steps that build up the test environment and execute the test in a fixed order.
The setup_all hook, on the other hand, resets the persistence only once per feature file. But a feature file in Gherkin almost always contains multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example whitebread.
If all your features need some similar initial step, background steps might be something to look into. Sadly, those changes were mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So currently that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is also such a thing as tags, which I think haven't been published in a release yet but are in master. They work with the callback tag. You can take a closer look at the example in the tests.
Things are currently a little bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)
Theos lists that it supports:
Third party frameworks can be placed inside $THEOS/lib, and utilised with instance_EXTRA_FRAMEWORKS. (kirb)
But I am not sure how to make it work or how to troubleshoot it. Can someone explain what this is for and how to use it? If I have already built a binary and the binary needs some frameworks, how do I do it?
I tried to follow the samples, putting the frameworks under $THEOS/lib and adding the flag, but when it runs (for example, with AWSCore.framework and AWSS3.framework added) it reports "library not loaded, image not found".
I need to understand how the framework gets added to my binary, what the run path is, etc., and how to debug where it goes wrong. Does the binary already contain the framework, or should I copy it somewhere? Thank you.
I have two versions of a framework, both stored under a "thirdparty" directory in my depot. One is a beta which I'm evaluating, and the other is stable. When I first made my workspace, I had it set up to use the stable one, but now I'd like to switch it to use the beta one for testing. I've got a few questions:
Let's say the frameworks are named Framework-2.0-beta and Framework-1.0-stable. Ideally I'd like them to simply map to a "framework" directory on my local machine, so that I don't have to change all my include paths and such in my project files. Then, in theory, if I wanted to swap back and forth between frameworks, I'd just change which one I'm pulling from the depot and do an update again. How do I do this? I tried at first just mapping them like I mentioned above, but I seem to be getting some errors using this method.
Is this the best way to go about something like this? Like, am I supposed to instead just use a unique workspace for use with one version of the framework vs. another?
Thanks for your help.
The most straightforward way, with just Perforce means, is to submit both versions of the framework to Perforce and map one of them in the client view of your project.
For example, submit the frameworks to places like this:
//thirdparty/framework-2.0-beta/...
//thirdparty/framework-1.0-stable/...
In your project's client view you map one of the two to a fixed target path, e.g.:
//thirdparty/framework-2.0-beta/... //yourclient/framework/...
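If you later want to point the same local directory at the other version, you only have to change that one line in the client view, e.g.:
//thirdparty/framework-1.0-stable/... //yourclient/framework/...
and then run p4 sync //yourclient/framework/... so that the files on disk are replaced with the other version (a sketch; yourclient stands for your actual client name).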
So far so good.
But in larger environments (with several people developing the same project) you will definitely run into problems with that approach because:
the compile/test/performance results of your workspace are not necessarily the same as those of other people working on the same project (depending on their client view)
having several modules (third-party or not) and handling them in this way will be hard to manage and will lead to problems with cross-dependencies (e.g. module A version 2 requires module B version > 3, but that doesn't work with certain other modules, etc.)
There are tools to solve these dependency issues. Look for Apache Ivy or Maven.
Is there any way to check your Facelets files for errors during the build or deployment process?
I am not looking for a solution that simply validates the Facelets files against their schema, but one that also verifies that the EL expressions are valid. For example, if the name of a property or method is misspelled in an EL expression (e.g. value="#{controller.nme}" instead of value="#{controller.name}"), this will only be discovered during testing at run time.
I am using JBoss 7.1.
Theoretically, Eclipse plugins like WTP and JBoss Tools can do this, but as of today they only work in the full IDE, not as a separate command-line tool that can be invoked by Ant or Maven.
Worse, those tools have never been perfect. They always report tons of false positives, and as a rule of thumb their validation algorithms are usually years behind. The current version of WTP probably just barely validates everything from Java EE 5 (maybe it still misses some obscure features).
As a result, if you fail your build based on this validation you'll probably never be able to deploy anything. Even in the most carefully coded and fully correct web apps, WTP and JBoss Tools find it necessary to report hundreds or, in large projects, thousands of warnings and errors. It's IMHO completely useless to depend upon.
This is a sort of chicken-and-egg problem. As you said yourself, many EL expressions can only be evaluated at run time.
Keep in mind that EL includes a whole lot more than simple property and method names: it has different implicit objects (param, facesContext, session, etc.) which are available in different contexts, and you can also add your own objects to that in many different ways (other Facelets templates, beans which may or may not be registered in faces-config, and even plain Java code inserting objects into the view).
All these things make it very hard to build tooling that does this type of checking for you. I think the closest thing to what you want would be to create your own JSF unit tests for each page using JSFUnit and Arquillian and integrate them into your build. As you are targeting JBoss 7, I think that should be feasible.
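As a rough sketch, modelled on the JSFUnit 2.x Arquillian examples, such a test could look like the following. The Controller bean, the page name and the deployment contents are placeholders, and the exact annotations and ShrinkWrap calls are worth double-checking against the JSFUnit/Arquillian documentation for your versions:

    import java.io.File;

    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.jsfunit.api.InitialPage;
    import org.jboss.jsfunit.jsfsession.JSFServerSession;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.spec.WebArchive;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    import static org.junit.Assert.assertEquals;

    @RunWith(Arquillian.class)
    public class ControllerPageIT {

        @Deployment
        public static WebArchive deployment() {
            // Package the page and the bean it references; a real deployment
            // would also need your faces-config.xml and any other resources.
            return ShrinkWrap.create(WebArchive.class, "controller-page-test.war")
                    .addClass(Controller.class)
                    .addAsWebResource(new File("src/main/webapp/controller.xhtml"), "controller.xhtml")
                    .setWebXML(new File("src/main/webapp/WEB-INF/web.xml"));
        }

        @Test
        @InitialPage("/controller.jsf")
        public void pageRendersWithoutElErrors(JSFServerSession server) {
            // Requesting the page renders the Facelet, so a typo such as
            // #{controller.nme} fails the test with a PropertyNotFoundException
            // instead of only showing up at run time.
            assertEquals("/controller.xhtml", server.getCurrentViewID());
        }
    }

Wired into the build (e.g. the Maven integration-test phase), a broken expression in the page could then fail the build instead of surfacing in production.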