I have the following POCO:
[Alias("Posts")]
public class Post : IReturn<Post>
{
[AutoIncrement]
[PrimaryKey]
public int PostId { get; set; }
public DateTime CreatedDate { get; set; }
[StringLength(50)]
public string CreatedBy { get; set; }
[StringLength(75)]
public string Title { get; set; }
public string Body { get; set; }
public int UpVote { get; set; }
public int DownVote { get; set; }
public bool IsPublished { get; set; }
public List<Comment> Comments { get; set; }
public List<Tag> Tags { get; set; }
}
It has a FK relationship to my Comment and Tag entities, so I'd like to return those in my response from my service, but it says 'Invalid Column name 'Comments'' and 'Invalid Column name 'Tags''. How do I see which Comments and Tags are attached to my Post with OrmLite? In EF I would simply use Include to load my related table information; what's the equivalent?
Edit
In response to the answers, I've done this:
public class PostFull
{
public Post Post { get; set; }
public List<Comment> Comments { get; set; }
public List<Tag> Tags { get; set; }
}
Then in my service, I return this. My PostTag entity is an intersection (junction) entity, as my Post and Tag entities have a M:M relationship:
var posts = Db.Select<Post>().ToList();
var fullPosts = new List<PostFull>();
posts.ForEach(delegate(Post post)
{
    var postTags = Db.Select<PostTag>(x => x.Where(y => y.PostId == post.PostId)).ToList();
    fullPosts.Add(new PostFull()
    {
        Post = post,
        Tags = Db.Select<Tag>(x => x.Where(y => postTags.Select(z => z.TagId).Contains(y.TagId))).ToList(),
        Comments = Db.Select<Comment>(x => x.Where(y => y.PostId == post.PostId)).ToList()
    });
});
return fullPosts;
Not sure whether it's a good design pattern or not?
Edit 2
Here are my entities:
[Alias("Tags")]
public class Tag
{
[AutoIncrement]
[PrimaryKey]
public int TagId { get; set; }
[StringLength(50)]
public string Name { get; set; }
}
[Alias("Posts")]
public class Post
{
[AutoIncrement]
[PrimaryKey]
public int PostId { get; set; }
public DateTime CreatedDate { get; set; }
[StringLength(50)]
public string CreatedBy { get; set; }
[StringLength(75)]
public string Title { get; set; }
public string Body { get; set; }
}
[Alias("PostTags")]
public class PostTag
{
[AutoIncrement]
[PrimaryKey]
public int PostTagId { get; set; }
[References(typeof(Post))]
public int PostId { get; set; }
[References(typeof(Tag))]
public int TagId { get; set; }
}
Tables in OrmLite are strictly a 1:1 mapping with the underlying db tables.
This means all complex type properties are blobbed into a db text field with the property name; they're never used to auto-map to child relations as you're expecting to do here.
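If you do want OrmLite itself to populate child collections, its POCO References feature ([Reference] on the collection plus the Load* APIs) is the closest equivalent to EF's Include for 1:1 and 1:M relations. Below is a minimal sketch only (it assumes a Comments table with a PostId FK and the usual ServiceStack.OrmLite / ServiceStack.DataAnnotations usings); M:M relations like Tags still need the join-table queries shown further down:
[Alias("Comments")]
public class Comment
{
    [AutoIncrement]
    [PrimaryKey]
    public int CommentId { get; set; }
    [References(typeof(Post))]   // FK column pointing back at the parent Post
    public int PostId { get; set; }
    public string Body { get; set; }
}
[Alias("Posts")]
public class Post
{
    [AutoIncrement]
    [PrimaryKey]
    public int PostId { get; set; }
    public string Title { get; set; }
    [Reference]                  // filled by the Load* APIs instead of being blobbed into a column
    public List<Comment> Comments { get; set; }
}
// The Load* APIs run the extra queries needed to fill [Reference] properties:
var postWithComments = db.LoadSingleById<Post>(postId);
var allPosts = db.LoadSelect<Post>();   // each Post.Comments is populated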
Here's an early answer that shows how you could map many to many relations with OrmLite.
Try to avoid N+1 queries: remember that every Db.* call is a remote DB query, so you should ideally avoid any database calls inside a loop.
Retrieving Posts by Many to Many Table query
You can use OrmLite's support for JOINs to construct a Typed query as you would in normal SQL to query by the Many to Many table and find all posts with the specified Tag:
Create and Populate Posts with Test Data
db.CreateTable<Post>();
db.CreateTable<Tag>();
db.CreateTable<PostTag>();
var post1Id = (int)db.Insert(new Post {
    CreatedBy = "gistlyn", Title = "Post 1", Body = "Body 1" }, selectIdentity:true);
var post2Id = (int)db.Insert(new Post {
    CreatedBy = "gistlyn", Title = "Post 2", Body = "Body 2" }, selectIdentity:true);
db.Insert(new Tag { TagId = 1, Name = "A" },
          new Tag { TagId = 2, Name = "B" });
db.Insert(new PostTag { PostId = post1Id, TagId = 1 },
new PostTag { PostId = post1Id, TagId = 2 });
db.Insert(new PostTag { PostId = post2Id, TagId = 1 });
Create a SQL Expression Joining all related tables:
When following OrmLite's normal naming conventions above, OrmLite can infer the relationship between each table saving you from specifying the JOIN expression for each query, e.g:
var postsWithTagB = db.Select(db.From<Post>()
.Join<PostTag>()
.Join<PostTag,Tag>()
.Where<Tag>(x => x.Name == "B"));
postsWithTagB.PrintDump();
This query returns just the first Post for Tag "B"; filtering on Tag "A" instead would return both Posts.
You can further explore this stand-alone example online by running it Live on Gistlyn.
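If you want to see exactly what SQL the typed JOIN expression produces, one option (a small sketch using OrmLite's GetLastSql() helper) is:
var q = db.From<Post>()
    .Join<PostTag>()
    .Join<PostTag, Tag>()
    .Where<Tag>(x => x.Name == "B");
var postsWithTagB = db.Select(q);
Console.WriteLine(db.GetLastSql());   // prints the generated SELECT ... INNER JOIN ... statement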
Populating all Posts with Tags and Comments
If this is a small blog and you want to load all the posts with their related tags and comments (e.g. for a home page or RSS feed), you can load the entire dataset into memory with 4 queries and use Linq2Objects to join them, with something like:
//Only 4 DB calls to read all table data
var posts = Db.Select<Post>();
var postTags = Db.Select<PostTag>();
var tags = Db.Select<Tag>();
var comments = Db.Select<Comment>();
//using Linq2Objects to stitch the data together
var fullPosts = posts.ConvertAll(post =>
{
    var tagIds = postTags
        .Where(x => x.PostId == post.PostId)
        .Select(x => x.TagId).ToList();
    return new PostFull {
        Post = post,
        Tags = tags.Where(x => tagIds.Contains(x.TagId)).ToList(),
Comments = comments.Where(x => x.PostId == post.PostId).ToList(),
};
});
You don't have to include Tags and Comments in the Post entity. In your case your DTO and DB model classes should be different; your Tag and Comment classes should have a PostId property.
In your service, query for Comments where PostId equals your Post's Id and do the same for Tags. Add the results to your Post DTO, which contains the lists of comments and tags.
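A rough sketch of that suggestion (the GetPost request DTO and PostDto response class are assumed names, and Comment is assumed to have a PostId column):
public class PostDto
{
    public Post Post { get; set; }
    public List<Comment> Comments { get; set; }
    public List<Tag> Tags { get; set; }
}
public object Get(GetPost request)   // GetPost is a hypothetical request DTO carrying a PostId
{
    var post = Db.SingleById<Post>(request.PostId);
    // Resolve the M:M Tags via the PostTag junction table
    var tagIds = Db.Column<int>(Db.From<PostTag>()
        .Where(x => x.PostId == post.PostId)
        .Select(x => x.TagId));
    return new PostDto
    {
        Post = post,
        Comments = Db.Select<Comment>(x => x.PostId == post.PostId),
        Tags = Db.Select<Tag>(x => tagIds.Contains(x.TagId)),
    };
}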
Related
I am trying to patch an object with the following code.
public object Patch(EditBlog request)
{
using (var db = _db.Open())
{
try
{
request.DateUpdated = DateTime.Now;
Db.Update<Blog>(request, x => x.Id == request.Id);
return new BlogResponse { Blog = Db.Select<Blog>(X=>X.Id == request.Id).SingleOrDefault() };
}
catch (Exception e)
{
return HttpError.Conflict("Something went wrong");
}
}
}
In Postman, I am calling the function like this "api/blog/1?=Title=Test1&Summary=Test&UserId=1".
When debugging I can see that those values have been assigned to the request.
During the Update it throws: "Cannot update identity column 'Id'"
My model looks like this
public class Blog
{
[AutoIncrement]
public int Id { get; set; }
public IUserAuth User { get; set; }
[Required]
public int UserId { get; set; }
[Required]
public string Title { get; set; }
public string Summary { get; set; }
public string CompleteText { get; set; }
[Required]
public DateTime DateAdded { get; set; }
public DateTime DateUpdated { get; set; }
}
And the EditBlog DTO looks like this:
[Route("/api/blog/{id}", "PATCH")]
public class EditBlog : IReturn<BlogResponse>
{
public int Id { get; set; }
public IUserAuth User { get; set; }
public int UserId { get; set; }
public string Title { get; set; }
public string Summary { get; set; }
public string CompleteText { get; set; }
public DateTime DateUpdated { get; set; }
}
The error message "Cannot update identity column 'Id'" does not exist anywhere in ServiceStack.OrmLite, it could be an error returned by the RDBMS when you're trying to update the Primary Key which OrmLite wouldn't do when updating a Model annotated with a Primary Key like your Blog class has with its annotated [AutoIncrement] Id Primary Key.
The error is within your Db.Up<T> method that's performing the update, which is not an OrmLite API, so it's likely your own custom extension method or an alternative library.
I would implement a PATCH Request in OrmLite with something like:
var blog = request.ConvertTo<Blog>();
blog.DateUpdated = DateTime.Now;
Db.UpdateNonDefaults(blog, x => x.Id == request.Id);
i.e. using OrmLite's UpdateNonDefaults API to only update non-default fields, and updating with the Blog table POCO rather than the EditBlog Request DTO.
Also you should use the Single APIs when fetching a single record, e.g:
Blog = Db.SingleById<Blog>(request.Id)
or
Blog = Db.Single<Blog>(x => x.Id == request.Id)
Instead of:
Blog = Db.Select<Blog>(X=>X.Id == request.Id).SingleOrDefault()
We have a DTO - Employee - with many (> 20) related DTOs and DTO collections. For "size of returned JSON" reasons, we have marked those relationships as [Ignore]. It is then up to the client to populate any related DTOs that they would like using other REST calls.
We have tried a couple of things to satisfy clients' desire to have some related Employee info but not all:
We created a new DTO - EmployeeLite - which has the most-requested fields defined with "RelatedTableNameRelatedFieldName" approach and used the QueryBase overload and that has worked well.
We've also tried adding a property to a request DTO - "References" - which is a comma-separated list of related DTOs that the client would like populated. We then iterate the response and populate each Employee with the related DTO or List. The concern there is performance when iterating a large List.
We're wondering if there is a suggested approach to what we're trying to do?
Thanks for any suggestions you may have.
UPDATE:
Here is a portion of our request DTO:
[Route("/employees", "GET")]
public class FindEmployeesRequest : QueryDb<Employee> {
public int? ID { get; set; }
public int[] IDs { get; set; }
public string UserID { get; set; }
public string LastNameStartsWith { get; set; }
public DateTime[] DateOfBirthBetween { get; set; }
public DateTime[] HireDateBetween { get; set; }
public bool? IsActive { get; set; }
}
There is no code for the service (automagical with QueryDb), so I added some to try the "merge" approach:
public object Get(FindEmployeesRequest request) {
var query = AutoQuery.CreateQuery(request, Request.GetRequestParams());
QueryResponse<Employee> response = AutoQuery.Execute(request, query);
if (response.Total > 0) {
List<Clerkship> clerkships = Db.Select<Clerkship>();
response.Results.Merge(clerkships);
}
return response;
}
This fails with Could not find Child Reference for 'Clerkship' on Parent 'Employee'
because in Employee we have:
[Ignore]
public List<Clerkship> Clerkships { get; set; }
which we did because we don't want "Clerkships" with every request. If I change [Ignore] to [Reference] I don't need the code above in the service - the List comes automatically. So it seems that .Merge only works with [Reference] which we don't want to do.
I'm not sure how I would use the "Custom Load References" approach in an AutoQuery service. And, AFAICT, the "Custom Fields" approach can't be used for related DTOs, only for fields in the base table.
UPDATE 2:
The LoadSelect with include[] is working well for us. We are now trying to cover the case where ?fields= is used in the query string but the client does not request the ID field of the related DTO:
public partial class Employee {
[PrimaryKey]
[AutoIncrement]
public int ID { get; set; }
.
.
.
[References(typeof(Department))]
public int DepartmentID { get; set; }
.
.
.
public class Department {
[PrimaryKey]
public int ID { get; set; }
public string Name { get; set; }
.
.
.
}
So, for the request
/employees?fields=id,departmentid
we will get the Department in the response. But for the request
/employees?fields=id
we won't get the Department in the response.
We're trying to "quietly fix" this for the requester by modifying the query.SelectExpression and adding , "Employee"."DepartmentID" to the SELECT before doing the Db.LoadSelect. Debugging shows that query.SelectExpression is being modified, but according to SQL Profiler, "Employee"."DepartmentID" is not being selected.
Is there something else we should be doing to get "Employee"."DepartmentID" added to the SELECT?
Thanks.
UPDATE 3:
The Employee table has three 1:1 relationships - EmployeeType, Department and Title:
public partial class Employee {
[PrimaryKey]
[AutoIncrement]
public int ID { get; set; }
[References(typeof(EmployeeType))]
public int EmployeeTypeID { get; set; }
[References(typeof(Department))]
public int DepartmentID { get; set; }
[References(typeof(Title))]
public int TitleID { get; set; }
.
.
.
}
public class EmployeeType {
[PrimaryKey]
public int ID { get; set; }
public string Name { get; set; }
}
public class Department {
[PrimaryKey]
public int ID { get; set; }
public string Name { get; set; }
[Reference]
public List<Title> Titles { get; set; }
}
public class Title {
[PrimaryKey]
public int ID { get; set; }
[References(typeof(Department))]
public int DepartmentID { get; set; }
public string Name { get; set; }
}
The latest update to 4.0.55 allows this:
/employees?fields=employeetype,department,title
I get back all the Employee table fields plus the three related DTOs - with one strange thing: the Employee's ID field is populated with the Employee's TitleID value (I think we saw this before?).
This request fixes that anomaly:
/employees?fields=id,employeetypeid,employeetype,departmentid,department,titleid,title
but I lose all of the other Employee fields.
This sounds like a "have your cake and eat it too" request, but is there a way that I can get all of the Employee fields and selective related DTOs? Something like:
/employees?fields=*,employeetype,department,title
AutoQuery Customizable Fields
Not sure if this is relevant, but AutoQuery has built-in support for customizing which fields to return with the ?fields=Field1,Field2 option.
Merge disconnected POCO Results
As you've not provided any source code it's not clear what you're trying to achieve or where the inefficiency in the existing solution lies, but you don't want to be doing any N+1 SELECT queries. If you are, have a look at how you can merge disconnected POCO results together, which lets you merge results from separate queries based on the relationships defined with OrmLite references. E.g. the example below uses 2 distinct queries to join Customers with their Orders:
//Select Customers who've had orders with Quantities of 10 or more
List<Customer> customers = db.Select<Customer>(q =>
q.Join<Order>()
.Where<Order>(o => o.Qty >= 10)
.SelectDistinct());
//Select Orders with Quantities of 10 or more
List<Order> orders = db.Select<Order>(o => o.Qty >= 10);
customers.Merge(orders); // Merge disconnected Orders with their related Customers
Custom Load References
You can selectively control which references OrmLite should load by specifying them when you call OrmLite's Load* APIs, e.g:
var customerWithAddress = db.LoadSingleById<Customer>(customer.Id,
include: new[] { "PrimaryAddress" });
Using Custom Load References in AutoQuery
You can customize an AutoQuery Request to not return any references by using Db.Select instead of Db.LoadSelect in your custom AutoQuery implementation, e.g:
public object Get(FindEmployeesRequest request)
{
var q = AutoQuery.CreateQuery(request, Request);
var response = new QueryResponse<Employee>
{
Offset = q.Offset.GetValueOrDefault(0),
Results = Db.Select(q),
Total = (int)Db.Count(q),
};
return response;
}
Likewise if you only want to selectively load 1 or more references you can change LoadSelect to pass in an include: array with only the reference fields you want included, e.g:
public object Get(FindEmployeesRequest request)
{
var q = AutoQuery.CreateQuery(request, Request);
var response = new QueryResponse<Employee>
{
Offset = q.Offset.GetValueOrDefault(0),
Results = Db.LoadSelect(q, include:new []{ "Clerkships" }),
Total = (int)Db.Count(q),
};
return response;
}
I am using the Entity Framework code-first approach. My code is:
class Blog
{
[Key]
public int BlobId { get; set; }
public string Name { get; set; }
public virtual List<Post> Posts { get; set; }
}
class Post
{
[Key]
public int PostId { get; set; }
public string Title { get; set; }
public string Content { get; set; }
public int BlobId { get; set; }
public virtual Blog Blob { get; set; }
}
class BlogContext:DbContext
{
public BlogContext() : base("constr") { }
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
}
class Program
{
static void Main(string[] args)
{
using (var db = new BlogContext())
{
Console.WriteLine("Enter a name for a new blob:");
var name = Console.ReadLine();
var b = new Blog { Name = name };
db.Blogs.Add(b);
db.SaveChanges();
Up to this step I have created two tables (Blogs and Posts) in my SQL Server database. BlobId is the primary key in the Blogs table and a foreign key in the Posts table. BlobId in the Blogs table is auto-incremented, and PostId in the Posts table is also auto-incremented.
Here I added the name to the Blogs table.
var id1 = from val in db.Blogs
where val.Name == name
select val.BlobId;
Now, using the Name, I obtain the BlobId from the Blogs table:
Console.WriteLine("Enter Title:");
var title = Console.ReadLine();
Console.WriteLine("Enter Content");
var content = Console.ReadLine();
var c = new Post { Title = title, Content = content, BlobId = id1};
db.Posts.Add(c);
db.SaveChanges();
Here I read the title and content, then add the title, content and BlobId (which I obtained from the other table) into the Posts table.
I am getting an error at BlobId = id1: "Cannot implicitly convert type 'System.Linq.IQueryable' to 'int'".
}
Console.ReadLine();
}
}
Can you help me solve this? If you did not understand my explanation, please reply.
The following query is a sequence of elements, not a scalar value. Even though you believe there is only one result, it is still a collection with one element when the results of the query are iterated over:
var id1 = from val in db.Blogs
where val.Name == name
select val.BlobId;
Change this to:
int id1 = (from val in db.Blogs
where val.Name == name
select val.BlobId).First();
This query will execute immediately and return the first element in the sequence. It will throw an exception if there is no match, so you may want to use FirstOrDefault and assign to a nullable int instead.
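For illustration, a sketch of that variant: projecting to int? makes "no matching blog" explicit, since FirstOrDefault over a plain int sequence would just return 0.
// Cast the projection to int? so "no match" comes back as null rather than 0
int? id1 = (from val in db.Blogs
            where val.Name == name
            select (int?)val.BlobId).FirstOrDefault();
if (id1 == null)
{
    Console.WriteLine("No blog found with that name.");
    return;
}
var c = new Post { Title = title, Content = content, BlobId = id1.Value };
db.Posts.Add(c);
db.SaveChanges();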
Update: Here's a gist that more fully demonstrates the issue https://gist.github.com/pauldambra/5051550
Ah, more update... If I make the Id property on the Mailing class a string then it all works. Should I just give up on integer ids?
I have 2 models
public class Mailing
{
public int Id { get; set; }
public string Sender { get; set; }
public string Subject { get; set; }
public DateTime Created { get; set; }
}
public class Recipient
{
public Recipient()
{
Status = RecipientStatus.Pending;
}
public RecipientStatus Status { get; set; }
public int MailingId { get; set; }
}
On my home page I want to grab the last 10 mailings, with a count of their recipients (and eventually with a count of recipients in each status, but...).
I have made the following index
public class MailingWithRecipientCount : AbstractMultiMapIndexCreationTask<MailingWithRecipientCount.Result>
{
public class Result
{
public int MailingId { get; set; }
public string MailingSubject { get; set; }
public string MailingSender { get; set; }
public int RecipientCount { get; set; }
}
public MailingWithRecipientCount()
{
AddMap<Mailing>(mailings => from mailing in mailings
select new
{
MailingId = mailing.Id,
MailingSender = mailing.Sender,
MailingSubject = mailing.Subject,
RecipientCount = 0
});
AddMap<Recipient>(recipients => from recipient in recipients
select new
{
recipient.MailingId,
MailingSender = (string) null,
MailingSubject = (string)null,
RecipientCount = 1
});
Reduce = results => from result in results
group result by result.MailingId
into g
select new
{
MailingId = g.Key,
MailingSender = g.Select(m => m.MailingSender)
.FirstOrDefault(m => m != null),
MailingSubject = g.Select(m => m.MailingSubject)
.FirstOrDefault(m => m != null),
RecipientCount = g.Sum(r => r.RecipientCount)
};
}
}
I query using
public ActionResult Index()
{
return View(RavenSession
.Query<RavenIndexes.MailingWithRecipientCount.Result, RavenIndexes.MailingWithRecipientCount>()
.OrderByDescending(m => m.MailingId)
.Take(10)
.ToList());
}
And I get:
System.FormatException : Input string was not in a correct format.
at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal)
Any help appreciated
Yes, integer ids are a pain. This is mainly because Raven always stores a full string document key, and you have to think about when you are using the key or your own id and translate appropriately. When reducing, you also need to align the int and string data types.
The minimum to get your test to pass is:
// in the "mailings" map
MailingId = mailing.Id.ToString().Split('/')[1],
// in the reduce
MailingId = g.Key.ToString(),
However - you could make your index a whole lot smaller and perform better by taking the sender and subject strings out of it. You can just put them in with a transform.
Here is a simplified complete index that does the same thing.
public class MailingWithRecipientCount : AbstractIndexCreationTask<Recipient, MailingWithRecipientCount.Result>
{
public class Result
{
public int MailingId { get; set; }
public string MailingSubject { get; set; }
public string MailingSender { get; set; }
public int RecipientCount { get; set; }
}
public MailingWithRecipientCount()
{
Map = recipients => from recipient in recipients
select new
{
recipient.MailingId,
RecipientCount = 1
};
Reduce = results => from result in results
group result by result.MailingId
into g
select new
{
MailingId = g.Key,
RecipientCount = g.Sum(r => r.RecipientCount)
};
TransformResults = (database, results) =>
from result in results
let mailing = database.Load<Mailing>("mailings/" + result.MailingId)
select new
{
result.MailingId,
MailingSubject = mailing.Subject,
MailingSender = mailing.Sender,
result.RecipientCount
};
}
}
As an aside, did you know about the RavenDB.Tests.Helpers package? It provides a simple base class, RavenTestBase, that you can inherit from which does almost all of the legwork for you.
using (var store = NewDocumentStore())
{
// now you have an initialized, in-memory, embedded document store.
}
Also - you probably shouldn't scan the assembly for indexes in a unit test. You might introduce indexes that weren't part of what you were testing. The better route is to create the index individually, like this:
documentStore.ExecuteIndex(new MailingWithRecipientCount());
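In a test you will usually also want to wait for indexing to finish before asserting; a small sketch combining this with RavenTestBase (WaitForNonStaleResults is a standard query customization in the client API of that era):
using (var store = NewDocumentStore())
{
    store.ExecuteIndex(new MailingWithRecipientCount());
    using (var session = store.OpenSession())
    {
        var results = session
            .Query<MailingWithRecipientCount.Result, MailingWithRecipientCount>()
            .Customize(x => x.WaitForNonStaleResults())   // let the in-memory index catch up
            .OrderByDescending(m => m.MailingId)
            .Take(10)
            .ToList();
    }
}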
I am a newbie with Entity Framework and I need to insert an object Comment that has a related FK object User into the database.
public class Comment
{
public int CommentID { get; set; }
public string CommentContent { get; set; }
public virtual User User { get; set; }
public virtual DateTime CommentCreationTime { get; set; }
}
public class User
{
public int UserID { get; set; }
public string UserName { get; set; }
public string UserPassword { get; set; }
public string UserImageUrl{get; set;}
public DateTime UserCreationDate { get; set; }
public virtual List<Comment> Comments { get; set; }
}
public void AddComment()
{
User user = new User() { UserID = 1 };
Comment comment = new Comment() { CommentContent = "This is a comment", CommentCreationTime = DateTime.Now, User = user };
var ctx = new WallContext();
comments = new CommentsRepository(ctx);
comments.AddComment(comment);
ctx.SaveChanges();
}
Ideally, with T-SQL, if I know the PRIMARY KEY of my User object, I could just insert my Comment object and specify the PK of my 'User' in the insert statement.
I have tried to do the same with Entity Framework and it doesn't seem to work. It would be overkill to have to first fetch the User object from the database just to insert a new 'Comment'.
Please, how can I achieve this ?
You need to attach the user object to the context so that the context knows it's an existing entity:
public void AddComment()
{
var ctx = new WallContext();
User user = new User() { UserID = 1 };
ctx.Users.Attach(user);
Comment comment = new Comment() { CommentContent = "This is a comment", CommentCreationTime = DateTime.Now, User = user };
comments = new CommentsRepository(ctx);
comments.AddComment(comment);
ctx.SaveChanges();
}
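As an alternative sketch (not part of the answer above): if you are willing to add an explicit foreign-key property to Comment, EF can take the key value directly and no Attach call is needed. The UserID property and the ctx.Comments DbSet are assumed additions here.
// requires: using System.ComponentModel.DataAnnotations.Schema;
public class Comment
{
    public int CommentID { get; set; }
    public string CommentContent { get; set; }
    public int UserID { get; set; }              // assumed FK scalar property
    [ForeignKey("UserID")]
    public virtual User User { get; set; }
    public virtual DateTime CommentCreationTime { get; set; }
}
// With the FK property exposed, the insert only needs the key value:
using (var ctx = new WallContext())              // assumes WallContext exposes a Comments DbSet
{
    ctx.Comments.Add(new Comment
    {
        CommentContent = "This is a comment",
        CommentCreationTime = DateTime.Now,
        UserID = 1
    });
    ctx.SaveChanges();
}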