I'm fairly new to ASP.NET MVC 3, and to coding in general, really.
I have a very small application I want to upload to my web hosting domain. I am using Entity Framework, and it works fine on my local machine.
I've entered a new connection string to use my remote database instead; however, it doesn't really work. First of all, I have a single MSSQL database, which cannot be dropped and recreated, so I cannot use that strategy in my initializer. I tried to supply null as the strategy, but to no avail: my tables simply do not get created in my database, and that's the problem. I don't know how to do that with Entity Framework.
When I run the application, it tries to select the data from the database, and that part works fine; I just don't know how to create those tables in my database through code-first.
I could probably get it to work by manually recreating the tables, but I want to know the code-first solution.
This is my initializer class:
public class EntityInit : DropCreateDatabaseIfModelChanges<NewsContext>
{
    private NewsContext _db = new NewsContext();

    protected override void Seed(NewsContext context)
    {
        new List<News>
        {
            new News { Author = "Michael Brandt", Title = "Test News 1 ", NewsBody = "Bblablabalblaaaaa1" },
            new News { Author = "Michael Brandt", Title = "Test News 2 ", NewsBody = "Bblablabalblaaaaa2" },
            new News { Author = "Michael Brandt", Title = "Test News 3 ", NewsBody = "Bblablabalblaaaaa3" },
            new News { Author = "Michael Brandt", Title = "Test News 4 ", NewsBody = "Bblablabalblaaaaa4" },
        }.ForEach(a => context.News.Add(a));

        base.Seed(context);
    }
}
As I said, I'm really new to all this, so excuse me if I'm not providing the information you need to answer my question; just let me know and I will supply it.
Initialization strategies do not support upgrade strategies at the moment. Initialization strategies should be used to initialize a new database; all subsequent changes should currently be done using scripts.
The best practice as we speak is to modify the database with a script and then adjust the code by hand to reflect the change. In future releases, upgrade/migration strategies will become available.
For now, try executing the scripts statement by statement from a custom IDatabaseInitializer. From there you can read the database version stored in the db and apply the missing scripts: simply store a db version in a table, then level up with change scripts.
public class Initializer : IDatabaseInitializer<MyContext>
{
    public void InitializeDatabase(MyContext context)
    {
        if (!context.Database.Exists() || !context.Database.CompatibleWithModel(false))
        {
            context.Database.Delete();
            context.Database.Create();

            var jobInstanceStateList = EnumExtensions.ConvertEnumToDictionary<JobInstanceStateEnum>().ToList();
            jobInstanceStateList.ForEach(kvp => context.JobInstanceStateLookup.Add(
                new JobInstanceStateLookup
                {
                    JobInstanceStateLookupId = kvp.Value,
                    Value = kvp.Key
                }));

            context.SaveChanges();
        }
    }
}
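To illustrate the versioned-upgrade idea, here is a rough sketch against the NewsContext from the question (the SchemaVersion table and the change scripts are hypothetical, not a definitive implementation):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class UpgradingInitializer : IDatabaseInitializer<NewsContext>
{
    // Hypothetical change scripts, keyed by the schema version they upgrade to.
    private static readonly IDictionary<int, string> Scripts = new Dictionary<int, string>
    {
        { 2, "ALTER TABLE News ADD PublishedOn datetime NULL" }
        // { 3, "..." }, and so on
    };

    public void InitializeDatabase(NewsContext context)
    {
        // Read the current schema level from the hypothetical SchemaVersion table.
        var current = context.Database.SqlQuery<int>("SELECT TOP 1 Version FROM SchemaVersion").First();

        // Apply every script above the current level, in order, recording each new level.
        foreach (var script in Scripts.Where(s => s.Key > current).OrderBy(s => s.Key))
        {
            context.Database.ExecuteSqlCommand(script.Value);
            context.Database.ExecuteSqlCommand("UPDATE SchemaVersion SET Version = {0}", script.Key);
        }
    }
}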
Have you tried using CreateDatabaseIfNotExists? Every time the context is initialized, the database will be created if it does not exist.
The database initializer can be set using the SetInitializer method of the Database class. If nothing is specified, it will use the CreateDatabaseIfNotExists class to initialize the database.

Database.SetInitializer<NewsContext>(null);

or

Database.SetInitializer<NewsContext>(new CreateDatabaseIfNotExists<NewsContext>());

I'm not sure if this is the exact syntax, as I have not written this in a while, but it should be very similar.
If your application is very small, you could maybe go for SQL CE 4.0.
Bin-deployment should allow you to run SQL CE 4.0 even if your provider doesn't have the binaries installed for it. You can read more here.
That way you can actually use whatever initializer you want, since you no longer have the problem of not being able to drop databases and delete tables.
Could this be of any help?
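For example, a minimal sketch of what the switch might look like at application startup (assuming the SqlCeConnectionFactory that ships with the EntityFramework package, and the EntityInit initializer from the question):

// In Application_Start in Global.asax.cs:
Database.DefaultConnectionFactory = new SqlCeConnectionFactory("System.Data.SqlServerCe.4.0");

// With a droppable .sdf file, the drop/recreate strategy becomes usable again.
Database.SetInitializer(new EntityInit());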
I want to do a simple thing: get the database names on a RavenDB server. It looks straightforward according to the docs (https://ravendb.net/docs/article-page/4.1/csharp/client-api/operations/server-wide/get-database-names); however, I'm facing a chicken-and-egg problem.
The problem arises because I want to get the database names without knowing them in advance. The code in the docs works great, but it requires an active connection to a DocumentStore, and to get an active connection to a DocumentStore it is mandatory to select a valid database. Otherwise I can't execute the GetDatabaseNamesOperation.
That makes me think I'm missing something. Is there any way to get the database names without having to know at least one of them?
The database isn't mandatory to open a store. The following code works with no problems:
using (var store = new DocumentStore
{
    Urls = new[] { "http://live-test.ravendb.net" }
})
{
    store.Initialize();
    var dbs = store.Maintenance.Server.Send(new GetDatabaseNamesOperation(0, 25));
}
We send GetDatabaseNamesOperation to the ServerStore, which is common for all databases and holds common data (like database names).
What is the proper method to seed data into an Azure database? Currently, in development, I have a seeder method that inserts the first couple of users as well as products. The users' (including the admin user's) username and password are hardcoded into the Seed method; is this an acceptable practice?
As far as the products are concerned, I have a JSON file with the product names and descriptions, which the seeder method iterates through in development, inserting the data.
To answer your question, "The Users (including admin user) username and password are hardcoded into the Seed method, is this an acceptable practice?":
No, you should not keep your password in cleartext format. You can keep it in encrypted (or hashed) form and seed that instead.
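For example, a minimal sketch of seeding a hashed admin password using ASP.NET Core Identity's PasswordHasher (the IdentityUser model, the context variable, and the SEED_ADMIN_PASSWORD environment variable are assumptions for illustration):

using System;
using Microsoft.AspNetCore.Identity;

// Hash the admin password before seeding, so no cleartext credential is stored.
var hasher = new PasswordHasher<IdentityUser>();
var admin = new IdentityUser { UserName = "admin" };

// Read the raw password from the environment rather than from source code.
var rawPassword = Environment.GetEnvironmentVariable("SEED_ADMIN_PASSWORD");
admin.PasswordHash = hasher.HashPassword(admin, rawPassword);

context.Users.Add(admin);
context.SaveChanges();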
In EF Core 2.1, the seeding workflow is quite different. There is now Fluent API logic to define the seed data in OnModelCreating. Then, when you create a migration, the seeding is transformed into migration commands to perform inserts, which are eventually transformed into the SQL that that particular migration executes. Further migrations will know to insert more data, or even perform updates and deletes, depending on what changes you make in the OnModelCreating method.
Suppose the three classes in my model are Magazine, Article and Author. A magazine can have one or more articles, and an article can have one author. There's also a PublicationsContext that uses SQLite as its data provider and has some basic SQL logging set up.
Let's take the example of a single entity type and start by seeing what it looks like to provide seed data for a magazine, at its simplest.
The key to the new seeding feature is the HasData Fluent API method, which you can apply to an Entity in the OnModelCreating method.
Here’s the structure of the Magazine type:
public class Magazine
{
    public int MagazineId { get; set; }
    public string Name { get; set; }
    public string Publisher { get; set; }
    public List<Article> Articles { get; set; }
}
It has a key property, MagazineId, two strings and a list of Article types. Now let’s seed it with data for a single magazine:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Magazine>().HasData(
        new Magazine { MagazineId = 1, Name = "MSDN Magazine" });
}
A couple of things to pay attention to here: first, I'm explicitly setting the key property, MagazineId; second, I'm not supplying the Publisher string.
Next, I’ll add a migration, my first for this model. I happen to be using Visual Studio Code for this project, which is a .NET Core app, so I’m using the CLI migrations command, “dotnet ef migrations add init.” The resulting migration file contains all of the usual CreateTable and other relevant logic, followed by code to insert the new data, specifying the table name, columns and values:
migrationBuilder.InsertData(
    table: "Magazines",
    columns: new[] { "MagazineId", "Name", "Publisher" },
    values: new object[] { 1, "MSDN Magazine", null });
Inserting the primary key value stands out to me here—especially after I’ve checked how the MagazineId column was defined further up in the migration file. It’s a column that should auto-increment, so you may not expect that value to be explicitly inserted:
MagazineId = table.Column<int>(nullable: false)
    .Annotation("Sqlite:Autoincrement", true)
Let’s continue to see how this works out. Using the migrations script command, “dotnet ef migrations script,” to show what will be sent to the database, I can see that the primary key value will still be inserted into the key column:
INSERT INTO "Magazines" ("MagazineId", "Name", "Publisher")
VALUES (1, 'MSDN Magazine', NULL);
That’s because I’m targeting SQLite. SQLite will insert a key value if it’s provided, overriding the auto-increment. But what about with a SQL Server database, which definitely won’t do that on the fly?
I switched the context to use the SQL Server provider to investigate and saw that the SQL generated by the SQL Server provider includes logic to temporarily set IDENTITY_INSERT ON. That way, the supplied value will be inserted into the primary key column. Mystery solved!
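The generated script wraps the insert in something along these lines (paraphrased for illustration, not the exact EF Core output):

SET IDENTITY_INSERT [Magazines] ON;
INSERT INTO [Magazines] ([MagazineId], [Name], [Publisher])
VALUES (1, N'MSDN Magazine', NULL);
SET IDENTITY_INSERT [Magazines] OFF;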
You can use HasData to insert multiple rows at a time, though keep in mind that HasData is specific to a single entity. You can’t combine inserts to multiple tables with HasData. Here, I’m inserting two magazines at once:
modelBuilder.Entity<Magazine>()
    .HasData(new Magazine { MagazineId = 2, Name = "New Yorker" },
             new Magazine { MagazineId = 3, Name = "Scientific American" });
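Seeding the related Article rows therefore takes its own HasData call, with the foreign key supplied explicitly; a minimal sketch, assuming an Article type with an ArticleId key, a Title and a MagazineId foreign key:

modelBuilder.Entity<Article>().HasData(
    new Article { ArticleId = 1, MagazineId = 1, Title = "EF Core 2.1 Query Types" });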
For a complete example, you can browse through this sample repo.
Hope it helps.
This is more of a call for help with an exception when calling insertEntity().
I'm using Node.js on Azure and editing in Monaco, and I've npm-installed the latest version of azure storage.
The exception I encounter is (full stack trace at the bottom):
Uncaught exception: Error: Parameter entityDescriptor.PartitionKey for function entityOperation should be an object at ArgumentValidator._.extend.object
I'm basically taking my object to save and creating 2 new properties: PartitionKey and RowKey. I give them string values. I'm following the examples. I'm not using entityGenerator, as the samples here don't, whereas the examples on the Azure Node developers portal do. I wouldn't mind using entityGenerator on the storage-specific properties if required, but the samples in the node azure GitHub repo seem to suggest that you can use simple strings. The entityGenerator looks a bit ugly and cumbersome, honestly, as you'd have to write extra code around the entity when you bring it back.
How can I adjust my code to solve this problem and call insertEntity() successfully?
exports.saveTally = function (tally, callback) {
    var tableSvc = getAzureTableService();
    tableSvc.createTableIfNotExists("tally", function (error, result, response) {
        if (!error) {
            tally.PartitionKey = "tally";
            tally.RowKey = tally.id;
            tableSvc.insertEntity("tally", tally, function (error, result, response) {
                if (error) {
                    console.log("*Error saving tally " + error.toString());
                } else {
                    callback(tally.id);
                }
            });
        }
    });
};
The location of the Azure storage client library has changed to https://github.com/Azure/azure-storage-node. The samples you’re using are from the old location and from an older version of the library which is why they’re not working. You’ll find updated samples and code at the new location.
In the newer version, an Edm type must be specified for each table entity. This is because type is stored in the storage service and we want to make sure that we are storing what you intend. Each table entity is an object with the form {_:value, $:Edm.Type}.
Entity generator is a convenience feature which makes it simpler to construct table entity objects. We return entities in the form just mentioned and using this convenience feature will not change that in any way.
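As a rough sketch, wrapping the properties for the newer library might look like this (the count property is a hypothetical stand-in for your tally's payload):

var azure = require('azure-storage');
var entGen = azure.TableUtilities.entityGenerator;

// Wrap each property so the library knows its Edm type.
var entity = {
    PartitionKey: entGen.String("tally"),
    RowKey: entGen.String(tally.id),
    count: entGen.Int32(tally.count)
};

tableSvc.insertEntity("tally", entity, function (error, result, response) {
    if (error) {
        console.log("*Error saving tally " + error.toString());
    }
});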
I'm new to both CouchDB and PouchDB and am using them to create a contact management system that syncs across mobile and desktop devices and can be used offline. I am seeing that using PouchDB is infinitely easier than having to write a PHP/MySQL backend.
I have been using it successfully, and when I make conflicting changes on offline devices, CouchDB uses an algorithm to arbitrarily pick a winner and then correctly pushes it to all the devices.
What I would like to do is implement a custom algorithm to merge conflicting records. Here is the algorithm I would like to use:
1. If a record is deleted on one client and merely updated on another, the updated version wins, unless both clients agree on the delete.
2. The record with the most recent "modified" timestamp becomes the master, and the older record becomes the secondary.
3. Any fields that exist only in the secondary (or are empty in the master) are moved over to the master.
4. The master revision is saved and the secondary is deleted.
CouchDB's guide has a good explanation, but I don't have a clue how to implement it with the PouchDB API during a continuous replication. According to the PouchDB API, there is an "onChange" listener in the replicate options, but I don't understand how to use it to intercept conflicts.
If someone could write a brief tutorial including some sample code, myself and I'm sure many other PouchDB users would appreciate it!
Writing an article with examples of exactly how to manage conflict resolution is a really good idea; it can be confusing, but in the lack of one, here is a quick sketch.
The idea is exactly the same as in CouchDB: to resolve conflicts, you delete the revisions that didn't win (and write a new winner if needed).
#1 is how CouchDB conflict resolution works anyway, so you don't need to worry about that; deleted leaves don't conflict.
function onChange(doc) {
    if (!doc._conflicts) return;
    // collectConflicts is an assumed helper that fetches the conflicting
    // revisions as full documents; byTime is an assumed comparator that
    // sorts newest-first on the "modified" timestamp.
    collectConflicts(doc._conflicts, function (docs) {
        // Take the newest revision as master (shift removes and returns it).
        var master = docs.sort(byTime).shift();
        // Copy over any fields the master is missing.
        docs.forEach(function (doc) {
            for (var prop in doc) {
                if (!(prop in master)) {
                    master[prop] = doc[prop];
                }
            }
        });
        // Save the merged master, then delete the losing revisions.
        db.put(master, function (err) {
            if (!err) {
                docs.forEach(function (doc) {
                    db.remove(doc);
                });
            }
        });
    });
}

db.changes({ conflicts: true, include_docs: true, onChange: onChange });
This will need error handling, etc., and could be written much nicer; it was just a quick napkin drawing of what the code could look like.
Is there any solution for creating records in other DBs from CRM 2011 records? When a record such as "cost" is created in CRM 2011, we want a record to be created in our Oracle DB. Could it be done through a plugin, or should a service be created for this?
Could you please provide me with references or solutions for this?
Any help would be greatly appreciated.
We had a similar request from a customer a while ago. They claimed that CRM's database wasn't to be trusted and wanted to securely store a copy of the records created in - guess what - SQL Server too. (Yes, we do understand the irony. They didn't.)
The way we've resolved it was to create a plugin. However, bear in mind that simply reacting to the Create message won't really do; you need to set up a listener for three of the CRUD operations (retrieval doesn't affect the external database, so it's rather the C_UD operations, then).
Here's the skeleton of the main Execute method.
public void Execute(IServiceProvider serviceProvider)
{
    Context = GetContextFromProvider(serviceProvider);
    Service = GetServiceFromProvider(serviceProvider);

    switch (Context.MessageName)
    {
        case "Create": ExecuteCreate(); break;
        case "Update": ExecuteUpdate(); break;
        case "Delete": ExecuteDelete(); break;
    }
}
After this dispatcher, you can implement the actual calls to the other database. There are three gotchas I'd like to give you a heads-up on (a sketch of the Delete case follows the list).
1. Remember to provide a suitable value to the outer DB when CRM doesn't offer you one.
2. Register the plugin as asynchronous, since you'll be talking to an external resource.
3. Consider the problem of entity references and whether to store them recursively as well.
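One concrete detail for the dispatcher above: on the Delete message, the Target input parameter is an EntityReference rather than an Entity, so the handler looks slightly different (DeleteFromOracle is a hypothetical helper):

private void ExecuteDelete()
{
    // On Delete, Target is an EntityReference, not an Entity.
    var reference = (EntityReference)Context.InputParameters["Target"];

    // Hypothetical helper that removes the matching row on the Oracle side.
    DeleteFromOracle(reference.LogicalName, reference.Id);
}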
Walk-through for plugin construction
Link to CRM SDK if you haven't got that
Information on registering the plugin
And besides that, I've got a walk-through (including code and structure) on the subject on my blog. The URL to it you'll have to figure out yourself; I'm not going to self-promote, but it's got to do with my name and WP. Google is your friend. :)
You could use a plugin to create a record in another system, although you would need to think about syncing and ensuring you don't get duplicates; but it certainly can be done.
Tutorial on plugins can be found here.
You need to write a plugin that runs on Create and uses the information on the created Cost entity to create a record in your Oracle DB.
As an example:
public void Execute(IServiceProvider serviceProvider)
{
    var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

    // Get the created entity from CRM
    var theCreatedEntity = context.InputParameters["Target"] as Entity;

    // Build up a stored procedure call
    using (OracleConnection objConn = new OracleConnection("connection string"))
    {
        objConn.Open(); // the connection must be opened before executing the command

        var cmd = new OracleCommand();
        cmd.Connection = objConn;
        cmd.CommandText = "stored procedure name";
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("param1", OracleType.Number).Value = theCreatedEntity.GetAttributeValue<int>("Attribute1");
        cmd.Parameters.Add("param2", OracleType.Number).Value = theCreatedEntity.GetAttributeValue<int>("Attribute2");
        // etc.
        cmd.ExecuteNonQuery();
    }
}
That should give you enough to get going.