In previous Spring Boot apps I have worked with, Hibernate has been what takes care of changes to the database, whether that be the data imports on startup or modifying the schema when the entities change.
So if I changed an entity, the spring.jpa.hibernate.ddl-auto setting would determine how the database is changed. Is Liquibase what does this in a JHipster app? I'm mostly concerned with what Liquibase does to modify the database, as opposed to the versioning aspect.
Liquibase creates two tables for its own purposes: DATABASECHANGELOG and DATABASECHANGELOGLOCK.
DATABASECHANGELOG keeps a record of all the changesets that have been applied to the database so far.
JHipster uses Liquibase to manage the database updates, and stores its configuration in the /src/main/resources/config/liquibase/ directory.
If you prefer (or need) to do a database update manually, here is the development workflow:
Modify your JPA entity (add a field, a relationship, etc.)
Create a new “change log” in your src/main/resources/config/liquibase/changelog directory.
Add this “change log” file to your src/main/resources/config/liquibase/master.xml file, so it is applied the next time you run your application.
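As a sketch of steps 2 and 3, a new changelog for adding a column might look like the following (the file name, table, and column are made up for illustration; adapt them to your entity, and mirror the schema version and `<include>` style already used in your generated master.xml):

```xml
<!-- src/main/resources/config/liquibase/changelog/20200101120000_added_subtitle_to_book.xml -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">

    <!-- One changeset = one recorded entry in DATABASECHANGELOG -->
    <changeSet id="20200101120000-1" author="yourname">
        <addColumn tableName="book">
            <column name="subtitle" type="varchar(255)"/>
        </addColumn>
    </changeSet>
</databaseChangeLog>
```

It is then referenced from master.xml alongside the existing entries, for example:

```xml
<include file="config/liquibase/changelog/20200101120000_added_subtitle_to_book.xml"
         relativeToChangelogFile="false"/>
```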
If you want to skip the Liquibase migration task during application startup, you can use the "no-liquibase" profile, e.g. spring.profiles.active=prod,no-liquibase
For more information on using Liquibase, see http://www.liquibase.org.
Related
I am new to JHipster.
I have created an entity, but now I want to delete it.
Could anyone please tell me the command for this, or otherwise help me with this task?
Unfortunately, it must be done manually.
The easiest way is to revert the git commit of the entity creation. This is why it's a good practice to always commit after entity creation.
Besides deleting files, you may also want to create a Liquibase changelog to drop the entity's table.
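A minimal sketch of such a drop changeset (the id, author, and table name are hypothetical; use your entity's actual table name, and put the changeset in a new changelog file referenced from master.xml):

```xml
<changeSet id="20200101130000-1" author="yourname">
    <!-- Removes the table that backed the deleted entity -->
    <dropTable tableName="my_deleted_entity"/>
</changeSet>
```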
I created my app with JHipster in July, and I have put it in production.
Now I need to edit an entity.
When I run jhipster entity, the sub-generator updates the initial XML changelog generated for the entity, but that is not the correct behavior for a production app: it should create a new XML changelog containing only the update, like mvn liquibase:diff does.
Searching on the web I found this answer: Add new field to existing entity with JHipster. At one point the user says:
create a new migration file to add only the new column (see addColumn
documentation), revert the creation migration to its original content
using git, run your app to apply changes to your database. This is
what you would do when your app is in production.
Regarding that last sentence: is it true? Does the JHipster entity sub-generator not support updating the database, only the initial creation?
Thanks
Yes, that's correct.
Either you write a manual Liquibase migration or you use liquibase:diff to help you in this process.
See official docs: https://www.jhipster.tech/development/#database-updates-with-the-maven-liquibasediff-goal
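A sketch of that workflow, assuming the usual Maven wrapper and generated file layout (the entity and file names here are hypothetical):

```shell
# Option A: write the migration by hand
#  1. Create a new changelog that contains only the new column (addColumn),
#     and reference it from master.xml.
#  2. Revert the entity's original "added entity" changelog to its committed
#     content, so it still matches what production has already run:
git checkout -- src/main/resources/config/liquibase/changelog/20170701000000_added_entity_Book.xml
#  3. Start the app to apply the new changelog to your database.

# Option B: let Liquibase compute the diff against your database (see docs above)
./mvnw compile liquibase:diff
```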
In our use case, all of our custom entities already exist in the database. We don't want Liquibase to handle any changes/updates to those entities. I know this can be achieved with liquibase.enabled: false in the YAML file for the relevant profile. We will also be using a Hibernate setting to stop the app from starting if the mappings to the database tables aren't correct.
However, we would still like the tables for the JHipster-generated User/Authority/etc. entities to be created; JHipster handles this with Liquibase. I know that on the first application startup I can run with Liquibase enabled, and from then on run with it disabled.
Is there a better workflow for this I could do through configuration?
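One way to express this through configuration alone, as a sketch (the property name follows the Spring Boot 1.x convention used by older JHipster versions, liquibase.enabled; in Spring Boot 2+ it is spring.liquibase.enabled):

```yaml
# application-prod.yml: keep Liquibase off for normal production runs
liquibase:
    enabled: false
```

For the very first deployment you could then override it once from the command line, e.g. --liquibase.enabled=true, instead of editing the file back and forth.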
You can run Liquibase as a command-line tool. In my project, we modified our pom.xml so that the build generates both the app JAR and a ZIP of the Liquibase migrations; both artifacts are deployed to a Nexus repository, and our deployment process (based on Ansible) executes Liquibase on the unzipped migrations retrieved from the Nexus repo.
In addition, you can also use Liquibase contexts to restrict some migrations to some environments only: e.g to init admin password only in dev or test.
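A sketch of the contexts idea (the changeset id, author, table, and values are illustrative only):

```xml
<!-- Only runs when the "dev" or "test" Liquibase context is active -->
<changeSet id="20200101140000-1" author="yourname" context="dev, test">
    <update tableName="jhi_user">
        <column name="password_hash" value="dev-only-placeholder-hash"/>
        <where>login = 'admin'</where>
    </update>
</changeSet>
```

The active contexts are then set per environment, e.g. liquibase.contexts: dev in the dev profile's YAML (spring.liquibase.contexts in Spring Boot 2+); changesets without a context attribute run everywhere.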
Consider the following deployment workflow, where staging and production use the same database:
Deploy application to Azure Website staging slot
Run EF code first migrations in staging (updating the shared database)
Test application in staging
Swap endpoints so that what was deployed to staging is now in the production slot
This process worked for a few releases, but today it didn't. The issue is around migrations run in the Staging slot against the same database as the Production slot. We're very careful not to rename or remove any columns that would break production between steps 2 and 4. Today's migrations included a new foreign key and a script to create some tables for a forum control (there are no entities for these new tables; they are managed entirely by a third-party control, and we only added the migration so that we didn't have to run the install script manually).
We got the standard AutomaticMigrationsDisabledException below in the production slot after step 2.
System.Data.Entity.Migrations.Infrastructure.AutomaticMigrationsDisabledException:
Unable to update database to match the current model because there are
pending changes and automatic migration is disabled. Either write the
pending model changes to a code-based migration or enable automatic
migration. Set DbMigrationsConfiguration.AutomaticMigrationsEnabled to
true to enable automatic migration.
Is there a list of known changes that cause this exception? Or are migrations just not designed to be used this way?
We've also considered something like what SO does, but we'd rather not reinvent the wheel if we don't have to.
We are running CRM in an enterprise-style environment, with four environments that changes need to propagate through before entering production. We are also dealing with a very large dynamic dataset, and with complex logic implemented in SQL Server that works from this dataset alongside the CRM components.
Our environments are as follows:
Development
Test
Staging
Production
To facilitate development, testing, etc., we restore our dataset and CRM from Production back to the various environments as needed; in Development we restore very frequently (usually nightly).
One of the challenges we have encountered is that when restoring the CRM tenant automatically using the PowerShell tools, we need an accurate UserMap XML file containing all the users that exist in our production environment. Currently, whenever a new user is added to production, our automated restore fails and we have to update the file manually.
In looking to automate file creation, I was unable to find the answer to the following question: Does the UserMap.xml file need to also include disabled users?
To resolve this, we produced a new UserMap.xml file from the production environment and checked it manually, confirming that disabled users ARE in fact included in the UserMap.xml file.