The mapping and metadata information for EntityContainer 'DBContext' no longer matches the information used to create the pre-generated views - entity-framework-5

This application was originally built with MVC 4.0 and EF 4.0 Code First, and has since been migrated to MVC 5.0 and EF 5.0 Code First. The exception above is thrown during data migration.

Create Data Factory Dataset in a specific Azure DevOps branch rather than directly in Data Factory

While trying to build an ADF pipeline that generates datasets within Data Factory, I ran into an interesting issue. Or maybe I misunderstand some components completely, in which case I'd happily be educated.
I basically read some metadata from a SQL Database table, which determines which source system, schema, and tables I should pull new data from. The metadata is stored in a bunch of variables, which then feed a Web activity that attempts to generate a new dataset as per the MS documentation. Yes, I'm trying to use Azure Data Factory to generate Azure Data Factory components.
The URL to create the dataset and the JSON body for the request are both generated using @concat and a number of the variables. The resulting dataset is a very straightforward file that does not contain references to the columns, just the table schema and table name. I generated these manually before, and that all seems to work brilliantly: I end up with a dataset connected to the source system, referencing the table from the metadata.
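For concreteness, here is a minimal sketch (in standalone Python rather than ADF dynamic content) of the call the Web activity ends up making against the Data Factory REST API; the subscription, resource group, factory, linked service, and table names below are all placeholders, not taken from the original setup:

    import requests  # standalone sketch; in the pipeline, a Web activity makes this same call

    # Placeholders throughout: subscription, resource group, factory, linked service, table.
    SUB, RG, FACTORY = "<subscription-id>", "<resource-group>", "<factory-name>"
    url = (
        f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
        f"/providers/Microsoft.DataFactory/factories/{FACTORY}"
        f"/datasets/SourceTableDataset?api-version=2018-06-01"
    )

    # A minimal dataset definition: just schema and table, no column references.
    body = {
        "properties": {
            "type": "AzureSqlTable",
            "linkedServiceName": {
                "referenceName": "SourceSqlLinkedService",  # placeholder name
                "type": "LinkedServiceReference",
            },
            "typeProperties": {"schema": "dbo", "table": "MyTable"},
        }
    }

    resp = requests.put(url, json=body, headers={"Authorization": "Bearer <token>"})
    resp.raise_for_status()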
The call succeeds, but the resulting dataset is published directly, as opposed to being added to my working branch. While this should not be a big issue once I manage to test everything properly, ideally the object would be created in my working branch (the factory is Git-integrated with Azure DevOps, so each object normally lives as a JSON file in a branch).
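One possible direction (an assumption on my part, not something the original post confirms): since a Git-integrated factory stores each dataset as a JSON file under the repository's /dataset folder, the Web activity could target the Azure DevOps Git Pushes REST API instead of the factory's management endpoint, committing the generated file to the working branch. A rough sketch, with the organization, project, repo, branch name, and token all being placeholders:

    import json
    import requests

    # Placeholders throughout; a real call needs a PAT with Code (read & write) scope.
    ORG, PROJECT, REPO = "<org>", "<project>", "<repo>"
    PAT = "<personal-access-token>"
    base = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/git/repositories/{REPO}"

    # Find the current tip of the working branch; the push must reference it.
    ref = requests.get(
        f"{base}/refs?filter=heads/my-working-branch&api-version=6.0",
        auth=("", PAT),
    ).json()["value"][0]

    dataset_json = {"name": "SourceTableDataset",
                    "properties": {"type": "AzureSqlTable"}}  # abbreviated body

    push = {
        "refUpdates": [{"name": ref["name"], "oldObjectId": ref["objectId"]}],
        "commits": [{
            "comment": "Add generated dataset",
            "changes": [{
                "changeType": "add",
                "item": {"path": "/dataset/SourceTableDataset.json"},
                "newContent": {"content": json.dumps(dataset_json, indent=2),
                               "contentType": "rawtext"},
            }],
        }],
    }
    requests.post(f"{base}/pushes?api-version=6.0", json=push,
                  auth=("", PAT)).raise_for_status()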
My next thought was to set up a linked service to my local PC and simply write the same contents as above there. My challenge is that I essentially am creating a file out of nothing: I am trying to use a Copy Data activity, and added an empty placeholder file to act as the source.
I configured the sink's Copy Behavior with dynamic content and attempted to add the JSON contents there. This gets the file created, but unfortunately it's empty. I also attempted to add a new column to the source with the same JSON as its contents.
However, since the file to be used as the sink doesn't exist yet, this produces a mapping error. Apart from that, I wouldn't want a column header to be written; just the dynamically created contents.
I'm not sure how to continue with this. I feel I'm very close to achieving my goal, but cannot seem to clear this final hurdle.
Any hints or suggestions would be very welcome.

Proper procedure for modifying JHipster Entities using JDL

I am trying out JHipster (version 6.4.1) using a monolith and a disk-based H2 database. I have created some entities in JDL and have the basic CRUD webpages working. Now that I feel comfortable with the process, I want to add some fields and rename others. I figured I could simply update the JDL, re-import it, start the application, and see the result of my changes. Instead, I get a ValidationFailedException from Liquibase and the application throws HTTP 500 errors due to database problems.
I have looked all over for guidance on the proper process for handling this seemingly common development scenario. Most of the places I have looked for guidance (such as https://www.jhipster.tech/creating-an-entity) discuss importing JDL as a one-time-only operation and do not discuss how to incrementally change and import the JDL.
I have tried a number of suggestions seen on SO, such as not overwriting the changelogs, running liquibase:diff, and adding the result to master.xml. This still causes the ValidationFailedException. In master.xml I see the comment <!-- jhipster-needle-liquibase-add-changelog - JHipster will add liquibase changelogs here -->, which leads me to believe that JHipster should be doing the heavy lifting and I am just missing a step.
I am by no means a JHipster or Liquibase expert, but I want to learn. How can I perform simple entity updates without a huge hassle?
[Update with more detail]
After re-importing the updated JDL, I have managed to get rid of the DB Validation Errors by blowing away the database with rm -rf target/h2db/db.
I'm happy with my changes and feel like a commit is in order. What I see is:
1. master.xml is unchanged.
2. The changelog from the first JDL import has been modified to include the updates I made.
If I understand how Liquibase works, I would have expected:
1. None of the existing changelogs would be touched.
2. A brand new changelog file would be created containing just the JDL changes I made this round.
3. master.xml would have changed only in that it would contain an additional changelog entry pointing to the file created in item 2 (an example of such an entry follows below).
Am I misinterpreting how Liquibase represents the evolution of the DB schema?
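For what it's worth, the incremental entry I expected to appear at the jhipster-needle-liquibase-add-changelog comment in master.xml would look roughly like this (the timestamp and file name are invented for illustration):

    <include file="config/liquibase/changelog/20191104120000_updated_entity_Foo.xml"
             relativeToChangelogFile="false"/>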
It appears that the page you referenced does have some instructions for updating entities. Farther down the page I saw this:
Updating an existing entity

The entity configuration is saved in a specific .json file, in the .jhipster directory. So if you run the sub-generator again, using an existing entity name, you can update or regenerate the entity.

When you run the entity sub-generator for an existing entity, you will be asked a question ‘Do you want to update the entity? This will replace the existing files for this entity, all your custom code will be overwritten’ with the following options:

Yes, re generate the entity - This will just regenerate your entity. Tip: this can be forced by passing a --regenerate flag when running the sub-generator.
Yes, add more fields and relationships - This will give you questions to add more fields and relationships.
Yes, remove fields and relationships - This will give you questions to remove existing fields and relationships from the entity.
No, exit - This will exit the sub-generator without changing anything.

You might want to update your entity for the following reasons:

You want to add/remove fields and relationships to an existing entity.
You want to reset your entity code to its original state.
You have updated JHipster, and would like to have your entity generated with the new templates.
You have modified the .json configuration file (the format is quite close to the questions asked by the generator, so it’s not very complicated), so you can have a new version of your entity.
You have copy/pasted the .json file, and want a new entity that is very close to the copied entity.

Updating dbcopy database when parent MapReduce View Changes

I have a database called "development-records" that has a MapReduce view with a "dbcopy" declaration that creates a view in a new database called "development-chained".
When we update the view in "development-records", we follow the usual steps below (a scripted sketch of the procedure appears after the list):
1. Create a duplicate copy of the design document that we want to change, for example by adding _OLD to its name: _design/fetch_OLD.
2. Put the new or 'incoming' design document into the database, using a name with the suffix _NEW: _design/fetch_NEW.
3. Query the fetch_NEW view, to ensure that it starts to build.
4. Poll the _active_tasks endpoint and wait until the index has finished building.
5. Put a duplicate copy of the new design document into _design/fetch.
6. Delete Design Document _design/fetch_NEW.
7. Delete Design Document _design/fetch_OLD.
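Automated, the procedure looks roughly like the following; the account, credentials, view name, and the new map function are all placeholders, not values from the original setup:

    import time
    import requests

    DB = "https://<account>.cloudant.com/development-records"
    AUTH = ("<user>", "<password>")

    old = requests.get(f"{DB}/_design/fetch", auth=AUTH).json()
    new = {"views": {"<view-name>": {"map": "function (doc) { emit(doc._id, 1); }"}}}

    # Steps 1 and 2: keep a copy of the old design doc, install the incoming one as _NEW.
    for name, doc in (("fetch_OLD", old), ("fetch_NEW", new)):
        body = {k: v for k, v in doc.items() if not k.startswith("_")}
        requests.put(f"{DB}/_design/{name}", json=body, auth=AUTH).raise_for_status()

    # Step 3: query the new view so its index starts building.
    requests.get(f"{DB}/_design/fetch_NEW/_view/<view-name>",
                 params={"limit": 1}, auth=AUTH)

    # Step 4: poll _active_tasks until no indexer is still working on fetch_NEW.
    tasks = "https://<account>.cloudant.com/_active_tasks"
    while any(t.get("type") == "indexer" and "fetch_NEW" in t.get("design_document", "")
              for t in requests.get(tasks, auth=AUTH).json()):
        time.sleep(10)

    # Steps 5-7: PUT the new doc to _design/fetch, then DELETE _NEW and _OLD
    # (deletes need the current _rev passed as a query parameter).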
The problem is that the documents specified in the dbcopy database "development-chained" don't seem to be updated -- all the old records stay. Is there a way to trigger the dbcopy database to perform the MapReduce again?
Unfortunately, according to the official Cloudant documentation, "The dbcopy feature can cause problems under some circumstances." Use of this feature is strongly discouraged, and it has since been removed from the documentation altogether, which is why current references to it are hard to find. I hope knowing that helps a little.

Pulling custom field into Rally Query

I am creating a query using the Rally Excel plug-in. The report has a base type of Task, but I want to include User Story information in the query.
I have been able to do this before by adding "WorkProduct.Release" to the columns listing; that works no problem. When I attempt this with a custom field named "CR#", no contents are returned.
I am able to pull custom fields from the Task itself without issue; it just appears to be a problem when pulling from the parent object.
I have verified the field name and that the content is actually populated. Does anyone know a way to pull this data via the excel plug-in or if there is a limitation with pulling custom field information from a parent?
In the Web Services API, the type of the WorkProduct attribute is Artifact. Artifact is a parent of Task, HierarchicalRequirement (user story), Defect, and other work item types. Those types can have custom fields created on them, but the parent Artifact is not aware of them. It is not possible to traverse from Artifact to a custom field, and it should likewise not be possible to traverse to Iteration or Release from Artifact: those fields do not exist on the Artifact object in the API. It is possible to traverse WorkProduct.FormattedID because the FormattedID attribute exists on Artifact; that is where work item types inherit FormattedID from. If I use WorkProduct.Release or WorkProduct.Iteration in the Excel plugin in a query on a Task object, following this syntax:
(Workproduct.Iteration = /iteration/12352898163)
I get an error, and
(Workproduct.Iteration.Name = it123)
will produce a similar error.
I put this to the Rally Support folks and got the following answer, so the short answer is no...can't be done:
When you query using WorkProduct.FormattedID on a task, the data can be returned because that field is part of "Artifact". You can see this by looking at the Web Services API information, which I have included some screenshots to illustrate this. The custom field you are trying to query doesn't reside on Artifact, so is not found by the query.

The actual work product that has your custom field would be either a defect or a story, but the Task object does not reference back to that to allow you to query.

You could do another query for the different work products and include the custom field, then combine the two worksheets.
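As a hedged sketch of that workaround done against the Web Services API directly rather than in Excel: query tasks and their parent stories separately, then join on the work product's FormattedID. The custom field name "c_CR" and the API key below are assumptions, not confirmed names:

    import requests

    BASE = "https://rally1.rallydev.com/slm/webservice/v2.0"
    HEADERS = {"ZSESSIONID": "<api-key>"}

    tasks = requests.get(f"{BASE}/task",
                         params={"fetch": "FormattedID,Name,WorkProduct", "pagesize": 200},
                         headers=HEADERS).json()["QueryResult"]["Results"]

    stories = requests.get(f"{BASE}/hierarchicalrequirement",
                           params={"fetch": "FormattedID,c_CR", "pagesize": 200},
                           headers=HEADERS).json()["QueryResult"]["Results"]

    # Join the two result sets the same way you would combine the two worksheets.
    cr_by_story = {s["FormattedID"]: s.get("c_CR") for s in stories}
    for t in tasks:
        parent = (t.get("WorkProduct") or {}).get("FormattedID")
        print(t["FormattedID"], t["Name"], cr_by_story.get(parent))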

Custom Metadatafield in Document and Web content in Liferay

I want a metadata field that gets its values from a database record. This metadata field should be added to a document.
Can anyone provide a solution for this requirement?
I presume you are using Liferay 6.1.
Web Content Structures
As for Web Content, you could programmatically create a JournalStructure (see JournalStructureLocalServiceUtil) and populate the list of possible values for your structure field with values coming out of the database. You can put this "import code" inside a batch job, so your structure field and the values in the external database stay in sync.
Document Metadata
How to do this with Metadata Sets is probably more interesting: not only do Dynamic Data Lists and Documents & Media use them in Liferay 6.1, but as of 6.2, Web Content structures will use the same metadata API in place of the old Journal API.
To implement this, check out the xsd column of the DDMStructure table. It has more or less the same format as the XML for a JournalStructure, but with more options available. Use DDMStructureLocalServiceUtil#addStructure to add such a new structure. Again, run this inside a batch job so you always have the latest external DB values.
