I need to define a unique index over two columns (the link columns in a junction table) in Orchard's migrations file. I found a similar question, but it is unanswered except for a comment stating that adding a unique index after the table has been created is prohibited. That is not my problem, though: I will rebuild my tables before the database goes live anyway, so I do everything in the Create() method. I tried a few variants of what I found and always got syntax errors, so I guess Orchard uses a (slightly?) different syntax than Ruby on Rails.
I don't want to create the indexes directly in the SQL Server database (that feels dirty, and Orchard is likely to get confused by it), and I don't like checking uniqueness in controllers/services (I have already written quite a lot of such code, which is hard to maintain and probably slow, and I just found another duplicate).
EDIT: I found that there are no foreign keys in the database. Combined with the Orchard guide to foreign keys, it seems that Orchard prefers doing things in code only, bypassing the strong points of databases such as proper foreign keys and composite primary keys (I know these were discouraged somewhere, otherwise I would have tried going this way from the start). However, as someone with more SQL than general programming experience, I would prefer to exploit proper keys and indexes as much as possible, unless that is heavily "non-orchardy". If avoiding the database tools has a good reason, please explain why, and sketch the Orchard way of ensuring the uniqueness of junction table records.
What I tried:
SchemaBuilder.CreateTable(typeof(FooBarRecord).Name,
    table => table
        .Column("Id", column => column.PrimaryKey().Identity())
        .Column("Foo_Id", column => column.NotNull().Unique())
        .Column("Bar_Id", column => column.NotNull().Unique())
    );
This is not what I want: I need to connect each foo with several rows of bar and vice versa. So I tried the Ruby on Rails solution:
add_index :FooBarRecord, [:Foo_Id,:Bar_Id], :unique => true
which returned syntax errors at most of the column names and brackets. The same happened when I wrapped it in an AlterTable:
SchemaBuilder.AlterTable(FooBarRecord,
table => table
add_index [:Foo_Id, :Bar_Id], :unique => true
);
The same happened when I formatted the table name differently:
SchemaBuilder.AlterTable(typeof(FooBarRecord).Name,
table => table
add_index [:Foo_Id, :Bar_Id], :unique => true
);
Here is the model:
public class FooBarRecord
{
    public virtual int Id { get; set; }
    public virtual RoleRecord Role { get; set; }
    public virtual EvidenceRecord Evidence { get; set; }
    public virtual bool EditPermission { get; set; }
}
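For illustration, here is a sketch of one possible shape for such a migration. This is unverified and makes two assumptions: that SchemaBuilder.ExecuteSql is available in this Orchard version, and that the physical table name carries the module's table prefix (the "MyModule_" prefix below is a placeholder):

```csharp
public int Create()
{
    // Create the junction table without per-column Unique() flags,
    // since uniqueness should apply to the (Foo_Id, Bar_Id) pair, not
    // to each column individually.
    SchemaBuilder.CreateTable(typeof(FooBarRecord).Name,
        table => table
            .Column<int>("Id", column => column.PrimaryKey().Identity())
            .Column<int>("Foo_Id", column => column.NotNull())
            .Column<int>("Bar_Id", column => column.NotNull())
        );

    // Hypothetical fallback: raw SQL for the composite unique index,
    // since the fluent API does not appear to expose a "unique" flag.
    // Replace "MyModule_FooBarRecord" with the actual prefixed table name.
    SchemaBuilder.ExecuteSql(
        "CREATE UNIQUE INDEX IDX_FooBarRecord_Foo_Bar " +
        "ON MyModule_FooBarRecord (Foo_Id, Bar_Id)");

    return 1;
}
```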
Suppose I create a model
public class Foo : TableEntity {
    public int OriginalProperty { get; set; }
}
I then deploy a service that periodically updates the values of OriginalProperty with code similar to...
//use model-based query
var query = new TableQuery<Foo>().Where(…);
//get the (one) result
var row = (await table.ExecuteQueryAsync(query)).Single();
//modify and write it back
row.OriginalProperty = some_new_value;
await table.ExecuteAsync(TableOperation.InsertOrReplace(row));
At some later time I decide I want to add a new property to Foo for use by a different service.
public class Foo : TableEntity {
    public int OriginalProperty { get; set; }
    public int NewProperty { get; set; }
}
I make this change locally and start updating a few records from my local machine without updating the original deployed service.
The behaviour I am seeing is that changes I make to NewProperty from my local machine are lost as soon as the deployed service updates the record. Of course this makes sense in some ways: the service is unaware that NewProperty has been added and has no reason to preserve it. However, my understanding was that the TableEntity implementation was dictionary-based, so I was hoping it would 'ignore' (i.e. preserve) newly introduced columns rather than delete them.
Is there a way to configure the query/insertion to get the behaviour I want? I'm aware of DynamicTableEntity but it's unclear whether using this as a base class would result in a change of behaviour for model properties.
Just to be clear, I'm not suggesting that continually fiddling with the model or having multiple client models for the same table is a good habit to get into, but it's definitely useful to be able to occasionally add a column without worrying about redeploying every service that might touch the affected table.
You can use InsertOrMerge instead of InsertOrReplace. A merge operation only writes the properties your entity defines, so existing columns that your model does not know about are preserved rather than deleted.
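A sketch in the question's own terms (assuming the classic Azure Storage table SDK and the same ExecuteQueryAsync helper the question uses; the partition-key filter is a placeholder):

```csharp
// Query and modify the row exactly as before...
var query = new TableQuery<Foo>().Where(
    TableQuery.GenerateFilterCondition(
        "PartitionKey", QueryComparisons.Equal, "some-pk"));
var row = (await table.ExecuteQueryAsync(query)).Single();
row.OriginalProperty = someNewValue;

// ...but merge instead of replace: a merge only sends the properties this
// client's model defines, so columns it doesn't know about (e.g. NewProperty
// written by a newer client) survive on the server.
await table.ExecuteAsync(TableOperation.InsertOrMerge(row));
```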
Our DBA doesn't want us to use double-quoted fields and table names in our queries (don't ask me the reason)... The problem is that ServiceStack.OrmLite double-quotes them all, and I have no idea how to disable this behaviour. We are using ServiceStack.OrmLite version 4.5.4.0.
For example:
public class ClassA {
    public int ID { get; set; }
    public string Name { get; set; }
}
If we make a simple query like:
using (IDbConnection db = dbFactory.Open())
{
    return db.LoadSingleById<ClassA>(id);
}
would generate:
select "ID", "Name" from "ClassA" where "ID" = #0
And this is what our DBA wants:
select ID, Name from ClassA where ID = #0
If anybody could help, I would appreciate it a lot.
PS: I know I could rewrite all the queries myself, but there is too much code to change, so I'm trying to avoid that solution because it would be too time-consuming at the moment.
Based on my inspection of the source code, it appears that this cannot be changed out of the box.
When OrmLite builds its query, it grabs the column name and wraps it in quotation marks. See here: https://github.com/ServiceStack/ServiceStack.OrmLite/blob/master/src/ServiceStack.OrmLite/OrmLiteDialectProviderBase.cs#L384
An alternative would be to create a new OrmLiteDialectProvider that inherits whichever provider you are using (e.g., SQL Server, Oracle, etc), and override one of the following methods:
GetQuotedColumnName(string columnName)
GetQuotedName(string name)
Overriding either of those to exclude the quotation marks would get you what you're looking for.
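A sketch of what that could look like for SQL Server. The method names come from the answer above and the linked source; treat the exact signatures as assumptions and verify them against your OrmLite 4.5.4 build:

```csharp
// Hypothetical provider that disables identifier quoting.
public class UnquotedSqlServerDialectProvider : SqlServerOrmLiteDialectProvider
{
    // "ClassA" -> ClassA
    public override string GetQuotedName(string name)
    {
        return name;
    }

    // "ID" -> ID
    public override string GetQuotedColumnName(string columnName)
    {
        return columnName;
    }

    public override string GetQuotedTableName(ModelDefinition modelDef)
    {
        return modelDef.ModelName;
    }
}

// Registration, assuming the same factory setup as in the question:
// var dbFactory = new OrmLiteConnectionFactory(connectionString,
//     new UnquotedSqlServerDialectProvider());
```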
I created an MVC app in which I renamed a model class from "Diplomata" to "Diplomas", and now I can't get the migrations to create a table with the name "Diplomas": they still use the old name for some reason (using .NET Framework 4.6 and Entity Framework 6.1.2).
Things I have tried so far:
dropping the db tables completely (from Visual Studio's SQL Server Object Explorer and deleting the files manually)
deleting the migration folder and re-enabling migrations
deleting the model and re-creating it (after deleting migrations and dropping the tables completely)
After enabling migrations again and running "add-migration Initial", I get a script that generates a table named "dbo.Diplomata".
This is the model:
namespace DDS.Data.Models
{
    using System.Collections.Generic;
    using DDS.Data.Common.Models;

    public class Diploma : BaseModel<int>
    {
        public string Title { get; set; }
        public string Description { get; set; }
        public virtual ICollection<Tag> Tags { get; set; }
    }
}
this is the ApplicationDbContext
public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection", throwIfV1Schema: false)
    {
    }
    ...
    public IDbSet<Diploma> Diplomas { get; set; }
    ...
}
and this is the part of the migration script that is automatically generated
public partial class Initial : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "dbo.Diplomata",
            c => new
            {
                Id = c.Int(nullable: false, identity: true),
                Title = c.String(),
                Description = c.String(),
                ...
            }
Also running a search in VS2015 for "Diplomata" in the entire solution doesn't find anything.
Adding a migration that renames the table makes the app crash after the update, because it then searches for a table with the old name (Invalid object name 'dbo.Diplomata').
I have been debugging this all day with no result so any ideas or suggestions for where and what to look for are appreciated.
PS: This is my first question here, so if I missed something or something is hard to understand, please tell me. Thank you.
Did you try cleaning the migrations table? When migrations are enabled, a system table called "__MigrationHistory" is generated. You can find this table from within SQL Server Management Studio: go to YourDatabase > Tables > System Tables and it will be there.
The way migrations work is as follows:
An initial migration is generated with a specified name.
The initial migration is executed.
When the initial migration and subsequent migrations are run, the changes are applied to the database, and a snapshot of the database structure is saved in the __MigrationHistory table as a new row.
When the application initializes, EF takes the latest record in the migrations table and compares that snapshot with the latest migration available; if they don't match, an exception is thrown. This is how EF determines whether or not the database has changed.
When you make changes to the original model, you are supposed to create subsequent migration files so you can apply or revert database changes. If you have already deleted the initial migration file, your best option is probably to clean the __MigrationHistory table. EF generates unique entries in the table (I believe the default is the name of the migration file plus a timestamp from when the migration was generated). You can also rename the initial migration file to match the name in the __MigrationHistory table and create a new migration that applies the rename of the table in question. This will only work if the names of the files and the model match the snapshots in the database; otherwise an exception will be thrown.
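For the rename itself, EF6's DbMigration base class exposes a RenameTable method. A sketch using the names from the question (the migration class name is made up for the example):

```csharp
// Sketch: migrate the existing dbo.Diplomata table to the new name
// instead of dropping and recreating it.
public partial class RenameDiplomataToDiplomas : DbMigration
{
    public override void Up()
    {
        RenameTable(name: "dbo.Diplomata", newName: "Diplomas");
    }

    public override void Down()
    {
        RenameTable(name: "dbo.Diplomas", newName: "Diplomata");
    }
}
```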
Take a look at this article for more information about migrations:
http://tech.trailmax.info/2014/03/inside_of_ef_migrations/
As a side note, you can also modify the default behavior of how the migration history table is generated. This may be helpful if you have specific needs, such as renaming it, not generating it as a system table, or adding additional columns. Check the following link (particularly useful for cloud-based databases):
How do I add an additional column to the __MigrationHistory table?
NOTE: It is important to mention that tables that are not contained in the initial model or subsequent models are excluded from model validation. This is especially useful if you want to create tables that shouldn't be monitored by EF, e.g. the membership provider tables.
I hope this helps clarify the problems you are having.
After a few more trials and errors I gave up on fixing the problem. Instead, I forced the app to create the model in a table with the name I wanted by adding a Table attribute to the model. Now the model looks like this:
[Table("Diplomas")]
public class Diploma : BaseModel<int>
{
    public string Title { get; set; }
    public string Description { get; set; }
    public virtual ICollection<Tag> Tags { get; set; }
}
Let's say I have a class (simplified for the example) and I want to ensure that the PersonId and Name fields are ALWAYS populated.
public class Person
{
    public int PersonId { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
}
Currently, my query would be:
var people = conn.Query<Person>("SELECT * FROM People");
However, I may have changed my database schema from PersonId to PID, and now the code will go through just fine.
What I'd like to do is one of the following:
Decorate the property PersonId with an attribute such as Required (that dapper can validate)
Tell dapper to figure out that the mappings are not getting filled out completely (i.e. throw an exception when not all the properties in the class are filled out by data from the query).
Is this possible currently? If not, can someone point me to how I could do this without affecting performance too badly?
IMHO, the second option would be the best because it won't break existing code for users and it doesn't require more attribute decoration on classes we may not have access to.
At the moment, no, this is not possible. And indeed, there are a lot of cases where it is actively useful to populate a partial model, so I wouldn't want to add anything implicit. In many cases, the domain model is an extended view on the data model, so I don't think option 2 can work - and I know it would break in a gazillion places in my code ;p If we restrict ourselves to the more explicit options...
So far, we have deliberately avoided things like attributes; the idea has been to keep it as lean and direct as possible. I'm not pathologically opposed to attributes - just: it can be problematic having to probe them. But maybe it is time... we could perhaps also allow simple column mapping at the same time, i.e.
[Map(Name = "Person Id", Required = true)]
int PersonId { get; set; }
where both Name and Required are optional. Thoughts? This is problematic in a few ways, though: at the moment we only probe for columns we can see, particularly in the extensibility API.
The other possibility is an interface that we check for, allowing you to manually verify the data after loading; for example:
public class Person : IMapCallback {
    void IMapCallback.BeforePopulate() { }
    void IMapCallback.AfterPopulate() {
        if (PersonId == 0)
            throw new InvalidOperationException("PersonId not populated");
    }
}
The interface option makes me happier in many ways:
it avoids a lot of extra reflection probing (just one check to do)
it is more flexible - you can choose what is important to you
it doesn't impact the extensibility API
but: it is more manual.
I'm open to input, but I want to make sure we get it right rather than rush in all guns blazing.
I'm refactoring a project using DDD, but I am wary of making too many entities their own aggregate roots.
I have a Store, which has a list of ProductOptions and a list of Products. A ProductOption can be used by several Products. These entities seem to fit the Store aggregate pretty well.
Then I have an Order, which transiently uses a Product to build its OrderLines:
class Order {
    // ...
    public function addOrderLine(Product $product, $quantity) {
        $orderLine = new OrderLine($product, $quantity);
        $this->orderLines->add($orderLine);
    }
}

class OrderLine {
    // ...
    public function __construct(Product $product, $quantity) {
        $this->productName = $product->getName();
        $this->basePrice = $product->getPrice();
        $this->quantity = $quantity;
    }
}
It looks like, for now, the DDD rules are respected. But I'd like to add a requirement that might break the rules of the aggregate: the store owner will sometimes need to check statistics about the orders that included a particular Product.
That basically means we would need to keep a reference to the Product in the OrderLine, but it would never be used by any method inside the entity. We would only use this information for reporting purposes, when querying the database; thus it would not be possible to "break" anything inside the Store aggregate because of this internal reference:
class OrderLine {
    // ...
    public function __construct(Product $product, $quantity) {
        $this->productName = $product->getName();
        $this->basePrice = $product->getPrice();
        $this->quantity = $quantity;
        // store this information, but don't use it in any method
        $this->product = $product;
    }
}
Does this simple requirement dictate that Product become an aggregate root? That would also cascade into ProductOption becoming an aggregate root, as Product has a reference to it, resulting in two aggregates that have no meaning outside a Store and will not need any Repository; that looks weird to me.
Any comment is welcome!
Even though it is for 'reporting only', there is still business/domain meaning there. I think your design is good, although I would not handle the new requirement by storing an OrderLine -> Product reference. I would do something similar to what you are already doing with the product name and price: just store some sort of product identifier (an SKU?) in the order line. This identifier/SKU can later be used in a query. The SKU can be a combination of the Store and Product natural keys:
class Sku {
    private String _storeNumber;
    private String _someProductIdUniqueWithinStore;
}

class OrderLine {
    private Money _price;
    private int _quantity;
    private String _productName;
    private Sku _productSku;
}
This way you don't violate any aggregate rules, and products and stores can be safely deleted without affecting existing or archived orders. And you can still have your 'Orders with ProductX from StoreY' report.
Update: Regarding your concern about the foreign key: in my opinion, a foreign key is just a mechanism that enforces long-living domain relationships at the database level. Since you don't have a domain relationship, you don't need the enforcement mechanism either.
In this case you need the information for reporting, which has nothing to do with the aggregate root.
So the most suitable place for it would be a service. It could be a domain service if it is related to the business, or better an application service (such as a querying service) that queries the required data and returns it as DTOs customized for the presentation layer or consumer.
I suggest you create a statistics service that queries the required data using read-only repositories (or preferably Finders) and returns DTOs, instead of corrupting the domain with query models.