What is the best and fastest way to clean up traces of JHipster?

When you build with a code generator, sometimes you would like to remove the traces, if you will, of anything in the source that clearly shows you used a generator to bootstrap your project.
This can be done manually, by refactoring, with some effort and knowledge of the code and its structure.
Of course, this is only something you would want to do once your code is stable and you know that you won't be using the generator again.
The Question:
What is the best and fastest way to clean up traces of JHipster, and are there scripts out there to do this?

Related

Preventing JHipster import-jdl from overwriting changes when updating entities

With my current workflow I have to check the diff in git and manually re-apply changes, like added functions, after every import-jdl that touches an entity I changed.
Is there a way to add functions to the classes JHipster creates without actually changing the files? Like code generation with annotations, or extending JHipster-created classes? I feel like I am missing some important documentation from JHipster; I would be grateful for pointers in the right direction.
Thanks!
I faced this problem in one of my projects and I'm afraid there is no easy way to tell JHipster not to overwrite your changes.
The good news is you have two ways of mitigating this and both will make your life much easier.
Update your entities in a separate branch
The idea is to update your entities (execute the import-jdl command) in a different branch and then, once the whole process is finished, merge the changes back to master.
This requires no extra changes to your code. The problem I had with this approach is that sometimes the merges were not trivial and I still had to go through a lot of code just to be sure that everything was still in place and working properly.
Do not change the generated code
This is known as the side-by-side practice. The general idea is that you never change the generated code directly; instead, you put your custom code in new files and extend the original ones whenever possible.
This way you can update your entities and JHipster will never remove or modify your custom code.
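For example, on the Java side one way to apply this is to extend a generated Spring service and mark the extension as the bean Spring should inject. A minimal sketch of the idea (the Customer* names are hypothetical stand-ins for generated classes, not JHipster's actual output):

```java
import org.springframework.context.annotation.Primary;
import org.springframework.stereotype.Service;

// CustomerService, CustomerRepository and CustomerDTO stand in for classes
// the generator would have produced; only this subclass is hand-written.
@Service
@Primary // Spring injects this bean wherever a CustomerService is required
public class CustomerExtendedService extends CustomerService {

    public CustomerExtendedService(CustomerRepository customerRepository) {
        super(customerRepository);
    }

    @Override
    public CustomerDTO save(CustomerDTO customerDTO) {
        // custom logic lives here, safe from being overwritten on regeneration
        return super.save(customerDTO);
    }
}
```

Regenerating the entity then only touches the generated CustomerService, and your hand-written subclass keeps working on top of it.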
There are two videos available that will teach you (with examples) how to manage this:
Custom and Generated Code Side by Side by Antonio Goncalves
JHipster side-by-side in practice by David Steiman
In my opinion this is the best approach.
I know this probably isn't the answer you were looking for, but to my knowledge there's no better way.

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
Now, ExUnit seems to provide a setup hook, which runs before each individual test, and a setup_all hook, which runs once before all the tests in a module.
When I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it turns out that the persistence is purged before each step definition is executed. But a Gherkin scenario almost always needs multiple steps, which build up the test environment and execute the test in a fixed order.
The other option, the setup_all hook, resets the persistence once per feature file. But a feature file in Gherkin almost always contains multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) or whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, background steps might be something to look into. Sadly, that change was mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So currently that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is such a thing as tags, which I think haven't been published with the new release yet, but are in master. They work with a callback tag. You can look closer at the example in the tests.
Things are currently a little bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

How to write feature files, and when to convert them to step definitions, to adapt to changing business requirements?

I am working on a BDD web development and testing project with other team members.
At the top, we write feature files in Gherkin and run Cucumber to generate step functions. At the bottom, we write Selenium page models and action library scripts. The rest is just filling in the step functions with Selenium scripts and finally running the Cucumber cases.
Sounds simple enough.
The problems start when we write the feature files.
Problem 1: Our client's requirements keep changing every week as the project proceeds; old features are removed and new ones are added.
Problem 2: On top of that, for some features the detailed steps keep changing too.
This gets really bad if we try to regenerate updated step functions from the updated feature files every day; there is quite a lot of housecleaning to do to keep step functions and feature files in sync.
To deal with problem 2, I remembered that one basic rule of writing Gherkin feature files is to use business domain language as much as possible. So I tried to persuade the BA to write the feature files a little more vaguely and not include too many UI-specific steps, so that we don't need to modify the feature files/step functions as often. But she hesitates, because the client's requirement document includes the details and she just tries to follow it.
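To illustrate what I mean by keeping the feature file in business language: the UI-specific detail can live in the step definition and the page model instead. A hypothetical sketch, assuming a recent cucumber-jvm (CheckoutSteps and CheckoutPage are invented names, not part of our project):

```java
import io.cucumber.java.en.When;

public class CheckoutSteps {

    // CheckoutPage is an invented Selenium page model that wraps the UI detail
    private final CheckoutPage checkoutPage = new CheckoutPage();

    // The feature file keeps the business-language step:
    //   When the customer places an order
    @When("the customer places an order")
    public void theCustomerPlacesAnOrder() {
        // If the UI flow changes, only the page model changes;
        // the feature file and this step signature stay stable.
        checkoutPage.fillShippingDetails();
        checkoutPage.submitOrder();
    }
}
```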
To deal with problem 1, I have no solution.
So my question is:
Is there a good way to write feature files so that they are less impacted by changes to the client's requirements? Can we write them vaguely, omitting details that may change (that way at least we can stabilize the step function prototypes), and if so, how far can we go?
When is a good time to generate the step definitions and fill in their content? From the beginning, or after the features stabilize a little? How often should we do it if the features keep changing? And is there a convenient way to clean out outdated step functions?
Any thoughts are appreciated.
Thanks,
If your client has specific UI requirements for which you are contracted to provide automated tests, then you ought to be writing those using actual test automation tools. Cucumber is not a test automation tool. If you attempt to use it as such, you are simply causing yourself a lot of pain for naught.
If, however, you are only contracted to validate that your application complies with the business rules provided by your client, during frequent and focused discovery sessions with them, then Cucumber may be able to help you.
In either case, you are ultimately going to fail if there's no real collaboration with your client. If they're regularly throwing new business rules or new business requirements over a transom through which you have limited or no visibility, then you are in a no-win situation.

Is there a generic way to consume my dependency's grunt build process?

Let's say I have a project where I want to use Lo-Dash and jQuery, but I don't need all of the features.
Sure, both of these projects have build tools so I can compile exactly the versions I need to save valuable bandwidth and parsing time, but I think it's quite uncomfortable and ugly to install both of them locally, generate my versions and then check them into my repository.
I'd much rather integrate their Grunt process into my own and create custom builds on the go, which would be much more maintainable.
The Lo-Dash team offers this functionality with a dedicated CLI, and even wraps it in a grunt task. That's very nice indeed, but I want a generic solution to this problem, as it shouldn't be necessary for every package author to replicate this.
I tried to achieve this somehow with grunt-shell hackery, but as far as I know it's not possible to install devDependencies more than one level deep, which makes it even uglier to execute the required grunt tasks.
So what's your take on this, or should I just move this over to the 0.5.0 discussion of grunt?
What you ask assumes that the package has:
1. A dependency on Grunt to build a distribution; most popular libraries have this, but some of the less common ones may still use shell scripts or the npm run command for general minification/compression.
2. Some way of generating a custom build in the first place, as dedicated tools like Modernizr and Lo-Dash have.
You could perhaps substitute number 2 with a generic tool that parses both your source code and the library's code and uses code coverage to eliminate unnecessary functions from the library. Something like this is already being developed (see goldmine); however, I can't make any claims about how good it is because I haven't used it.
Also, I'm not sure how that would work in an AMD context where there are a lot of interconnected dependencies; ideally you'd be able to run the r.js optimiser and get an almond build for production, and then filter that for unnecessary functions (most likely with Istanbul; you would then have to make sure that the filtered script still passed all your unit/integration tests). Not sure how that would end up looking, but it'd be pretty cool if it could happen. :-)
However, there is a task made specifically for running Grunt tasks from 'sub-gruntfiles' that you might like to have a look at: grunt-subgrunt.

Resources for learning how to better read code

I recently inherited a large codebase and am having to read it. The thing is, I've usually been the dev starting a project. As a result, I don't have a lot of experience reading code.
My reaction to having to read a lot of code is, well, um, to rewrite it. But I need to bring myself up to speed quickly and build on top of an existing system.
Do other people have techniques they've learned to absorb a code base? At this point, I'm just reading through the code. I've tried generating UML diagrams with UModel, but they're so big they won't print cleanly, and when I zoom in I really do lose the perspective of seeing all the relationships.
How have other people dealt with this problem?
Wow - I literally just finished listening to a podcast on reading code!!!
http://www.pluralsight-training.net/community/blogs/pluralcast/archive/2010/03/01/pluralcast-10-reading-code-with-alan-stevens.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+pluralcast+%28Pluralcast+by+Pluralsight%29
I would recommend listening to it. One interesting point was made that I found radical, and it may be something you could try (I know I'm going to!): download the entire source code base, start editing and refactoring the code, then...throw that version away!!! I think, with all the demands of deadlines, doing this would not even occur to most developers.
I am in a similar position to you in my own work and I have found the following has worked for me:
- Write test cases on the existing code (see the sketch after this list). To be able to write a test case you need to be able to understand the code base.
- If they are available, look at the bugs/issues that have been documented through the life cycle of the product and see how they were resolved.
- Try to refactor some of the code - you'll probably break it, but that's fine; you can throw it away and start again. By decomposing the code into smaller problems you'll understand it better.
You don't need to make drastic changes when refactoring, though. When you're reading the code and you understand something, rename the variables and methods so they better reflect the problem they are trying to solve.
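To expand on the first point: a characterization test pins down what the code does today, before you change anything. A minimal sketch, assuming JUnit 5 (LegacyPriceCalculator is a made-up stand-in for a class in the inherited code base):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class LegacyPriceCalculatorTest {

    @Test
    void pinsDownCurrentBulkDiscountBehaviour() {
        LegacyPriceCalculator calculator = new LegacyPriceCalculator();
        // The expected value is whatever the code returns today; the test
        // documents existing behaviour so refactoring can't silently change it.
        assertEquals(90.0, calculator.priceFor(10, 10.0), 0.001);
    }
}
```

Once a handful of these pass, you can refactor underneath them with far more confidence.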
Oh and if you can, please get a copy of Working Effectively with Legacy Code by Michael C. Feathers - I think you'll find it invaluable in your situation.
Good luck!
This article provides a guideline:
Visualization: a visual representation of the application's design.
Design Violations: an understanding of the health of the object model.
Style Violations: an understanding of the state the code is currently in.
Business Logic Review: the ability to test the existing source.
Performance Review: where are the bottlenecks in the source code?
Documentation: does the code have adequate documentation for people to understand what they're working on?
In general, I start at the entry point of the code (main function, plugin hook, etc.) and work through the basic execution flow. If it's a decent code base it should be broken up into decent-sized chunks, and you can then go through and figure out what each chunk of the code is responsible for, looking back at where in the execution flow of the system it is called.
For package/module/class exploration I use whatever doxygen generates once it has been run over the sources. It produces some nice class relation diagrams, inheritance hierarchies and file dependency graphs. The benefit of these is that each is focused on a single class and how it ties in with its neighbors, siblings and parents, so the graphs are usually of manageable size and easy to understand.
As I come to understand what the different classes, functions and sub-systems do, I like to add comments to fill in what looks like obviously missing documentation. This helps when you re-read the code a second time.
I would recommend another podcast and some resources:
SE-Radio episode on Software Archeology
