I have the class below and I am trying to use EF Core to store this model in Cosmos DB, but the JSON is stored as:
{
  "id": "e6b75f1f-0cc2-488c-9074-62e7e85c727a",
  "Type": "testType",
  "TagName": "TagName",
  "DictionaryList": {}
}
public class TestDictionary
{
public Guid Id { get; set; }
public string Type { get; set; }
public string TagName { get; set; }
public Dictionary<string,object> DictionaryList { get; set; }
}
The problem is that the DictionaryList property doesn't save the data passed in from the API; it is always stored as an empty object instead of the dictionary contents. I want my JSON to be stored in Cosmos DB as below:
{
  "id": "e6b75f1f-0cc2-488c-9074-62e7e85c727a",
  "Type": "testType",
  "TagName": "TagName",
  "DictionaryList": {
    "Account number": "123456",
    "Check date": "11/20/2020"
  }
}
Only collections of primitive types are supported by EF Core 6, so DictionaryList needs to be typed as Dictionary<string, string>.
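A minimal sketch of the same model with a primitive-valued dictionary, which EF Core 6 can persist without extra configuration:
public class TestDictionary
{
    public Guid Id { get; set; }
    public string Type { get; set; }
    public string TagName { get; set; }
    // string values instead of object, so EF Core treats it as a primitive collection
    public Dictionary<string, string> DictionaryList { get; set; }
}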
Dictionary<string, object> is recognized as a nested entity type. You could use it, but you'd need to explicitly configure all the keys that it can contain:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<TestDictionary>()
.OwnsOne(t => t.DictionaryList, c =>
{
c.Property<string>("Type");
c.Property<string>("TagName");
});
}
I later realized that I could insert a Dictionary<string, object> without any problem using the Cosmos SDK directly, and that EF Core does expose the underlying Cosmos client. So I solved the issue by accessing the underlying Cosmos client and using it to insert my dictionary.
var cosmosClient = dbContext.Database.GetCosmosClient();
var container = cosmosClient.GetContainer("db", "container");
// the item must contain an "id" entry for Cosmos DB to accept it
await container.CreateItemAsync(new Dictionary<string, object> { ["id"] = Guid.NewGuid().ToString() });
Works like a charm. Remember that the Id property has to end up as id in the stored document, or at least the serializer has to be told to map Id to id.
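If you go through the SDK with a POCO rather than a dictionary, one way to handle that mapping (assuming the SDK's default Newtonsoft.Json-based serializer) is sketched below; TestDocument is just an illustrative name:
using Newtonsoft.Json;

public class TestDocument
{
    // serialize the Id property as the "id" field Cosmos DB expects
    [JsonProperty("id")]
    public Guid Id { get; set; }
}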
I have a table called PODetail with a composite primary key of POno and ItemCode, and I have the following:
[Route("/podetail/{POno}/{ItemCode}")]
public class UpdatePODetail : IReturn<PODetail> {
public string POno { get; set; }
public string ItemCode { get; set; }
public int? QtyPend { get; set; }
public decimal? NewPrice { get; set; }
public bool? BackOrder { get; set; }
public string ActionCode { get; set; }
public bool? OpenOrder { get; set; }
}
public class PODetailService : Service {
public object Any(UpdatePODetail request) {
var podetail = Db.SingleFmt<PODetail>("ItemCode = {0} AND POno = {1}", request.ItemCode, request.POno);
// var cap = new CaptureSqlFilter();
try {
Db.Update(podetail);
} catch {
// var sql = string.Join(";\n\n", cap.SqlStatements.ToArray());
}
:
:
try {
Db.Update(podetail);
} catch (Exception ex) {
string error = ex.Message;
}
return podetail;
}
}
I added the Db.Update call at the top just to check whether there was some issue changing a column, but I get
Violation of PRIMARY KEY constraint 'aaaaaPoDetail_PK'. Cannot insert
duplicate key in object 'dbo.PODetail'.
So then I added the cap = line to see the generated SQL, which returns
UPDATE "PODetail" SET "NewItemCode"=#NewItemCode, "POno"=#POno, "Vendor"=#Vendor, "ActionCode"=#ActionCode, "Price"=#Price, "NewPrice"=#NewPrice, "CostPrice"=#CostPrice, "QtyOrd"=#QtyOrd, "QtyRcv"=#QtyRcv, "QtySPO"=#QtySPO, "QtyPend"=#QtyPend, "BackOrder"=#BackOrder, "OpenOrder"=#OpenOrder, "OrderDate"=#OrderDate, "InvoiceNo"=#InvoiceNo, "InvoiceVendor"=#InvoiceVendor, "InvoiceDate"=#InvoiceDate, "InvoiceDiscount"=#InvoiceDiscount, "QtyCancel"=#QtyCancel, "Qtylabels"=#Qtylabels, "REOVendor"=#REOVendor, "CurrentRcvQty"=#CurrentRcvQty, "SOPickQty"=#SOPickQty, "SOItem"=#SOItem, "QtyOther"=#QtyOther, "BackOrderCode"=#BackOrderCode WHERE "ItemCode"=#ItemCode
And then it runs fine uncommented -- no exceptions. If I remove it, I get the primary key error again.
What is the deal -- why do I need that CaptureSqlFilter call? Or what do I need to change so that it knows both POno and ItemCode are primary keys and the update says WHERE "ItemCode"=#ItemCode AND "POno"=#POno? It almost seems as if it is trying to do an INSERT rather than an UPDATE without the CaptureSqlFilter.
Update 1
The documentation says:
Limitations: For simplicity, and to be able to have the same POCO class persisted in db4o, memcached, redis or on the filesystem (i.e. providers included in ServiceStack), each model must have a single primary key. By convention OrmLite expects it to be Id, although you can use the [Alias("DbFieldName")] attribute to map it to a column with a different name, or use the [PrimaryKey] attribute to tell OrmLite to use a different property for the primary key.
You can still SELECT from these tables, you will just be unable to make use of APIs that rely on it, e.g. Update or Delete where the filter is implied (i.e. not specified), all the APIs that end with ById, etc.
Workaround single Primary Key limitation
A potential workaround to support tables with multiple primary keys is to create an auto-generated Id property that returns a unique value based on all the primary key fields.
So I tried to add this
public class PODetail {
public string Id { get { return this.ItemCode + "/" + this.POno; } }
public string ItemCode { get; set; }
public string NewItemCode { get; set; }
public string POno { get; set; }
:
}
But when it went to execute:
Db.SingleFmt<PODetail>
it errored out with something like "Id is not a valid column" or "column not found".
So I then tried
public class PODetail {
//public string Id { get { return this.ItemCode + "/" + this.POno; } }
[PrimaryKey]
public string ItemCode { get; set; }
public string NewItemCode { get; set; }
[PrimaryKey]
public string POno { get; set; }
:
}
and it worked on the Db.SingleFmt ... and the Db.Update
So then I added back in the CaptureSqlFilter to see what the query looked like and I got
UPDATE "PODetail" SET "NewItemCode"=#NewItemCode, "Vendor"=#Vendor, "ActionCode"=#ActionCode, "Price"=#Price, "NewPrice"=#NewPrice, "CostPrice"=#CostPrice, "QtyOrd"=#QtyOrd, "QtyRcv"=#QtyRcv, "QtySPO"=#QtySPO, "QtyPend"=#QtyPend, "BackOrder"=#BackOrder, "OpenOrder"=#OpenOrder, "OrderDate"=#OrderDate, "InvoiceNo"=#InvoiceNo, "InvoiceVendor"=#InvoiceVendor, "InvoiceDate"=#InvoiceDate, "InvoiceDiscount"=#InvoiceDiscount, "QtyCancel"=#QtyCancel, "Qtylabels"=#Qtylabels, "REOVendor"=#REOVendor, "CurrentRcvQty"=#CurrentRcvQty, "SOPickQty"=#SOPickQty, "SOItem"=#SOItem, "QtyOther"=#QtyOther, "BackOrderCode"=#BackOrderCode WHERE "ItemCode"=#ItemCode AND "POno"=#POno
Which is what I wanted in the first place.
It works, but what is the deal? Can you have the [PrimaryKey] attribute multiple times (it appears so)? And why didn't the auto-generated Id work? Just wondering if I am missing something or not understanding the documentation correctly.
Oh and sorry for posting in the comments!
what do I need to change so that it knows both POno and ItemCode are
primary keys
OrmLite's primary limitation is that each table has a single primary key.
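If you'd rather not annotate the model, another option is OrmLite's Update overload that takes an explicit where expression; a rough sketch against the PODetail model above, just to show the shape:
// update the loaded row, filtering on both parts of the composite key explicitly
Db.Update(podetail, p => p.ItemCode == request.ItemCode && p.POno == request.POno);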
Also you can use the built-in Profiling or debug logging to view the generated SQL without needing to change code to use CaptureSqlFilter.
I'd also recommend that you don't use the Request DTO for anything other than defining your Service. You can use the built-in AutoMapping to populate your data model from it.
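For example, a rough sketch of populating the data model from the Request DTO using ServiceStack's AutoMapping extensions (assuming the property names line up, as they appear to here):
var podetail = Db.SingleFmt<PODetail>("ItemCode = {0} AND POno = {1}", request.ItemCode, request.POno);
// copy only the values the client actually sent onto the loaded row
podetail.PopulateWithNonDefaultValues(request);
Db.Update(podetail); // works with the [PrimaryKey] attributes in place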
I am a .NET developer and am currently exploring ArangoDB. I have played around with the arangod web user interface and liked this NoSQL database very much, until I got into the details of coding: I could not get the .NET driver working properly, even for simple CRUD operations. Here's the problem.
ArangoClient.AddConnection("127.0.0.1", 8529, false, "Sample", "Sample");
var db = new ArangoDatabase("Sample");
string collectionName = "MyTestCollection";
var collection = new ArangoCollection();
collection.Name = collectionName;
collection.Type = ArangoCollectionType.Document;
if (db.Collection.Get(collectionName) == null)
{
db.Collection.Create(collection);
}
var employee = new Employee();
employee.Id = "1234";
employee.Name = "My Name";
employee.Salary = 33333;
employee.DateOfBirth = new DateTime(1979, 7, 22);
db.Document.Create<Employee>("MyTestCollection", employee);
employee.Name = "Tan";
db.Document.Update(employee);
It throws an error on db.Document.Update(employee). Here's the error message: Field '_id' does not exist.
Then I tried to add the field _id myself, though that seemed weird, and it gave me another error message:
Arango.Client.ArangoException : ArangoDB responded with error code BadRequest:
expecting PATCH /_api/document/<document-handle> [error number 400]
at Arango.Client.Protocol.DocumentOperation.Patch(Document document, Boolean waitForSync, String revision)
at Arango.Client.ArangoDocumentOperation.Update[T](T genericObject, Boolean waitForSync, String revision) ...
I have no clue at all and do not know how to proceed further. Any help will be much appreciated. Thanks.
This is likely due to the definition of the Employee class, which is not contained in the above snippet.
Documents in ArangoDB are identified by special system attributes such as _id, _key and _rev. These attributes should be mapped to properties in your .NET classes even if you don't use them explicitly: one property in the class should be tagged as "Identity", one as "Key", and one as "Revision". Here is an example class definition that should work:
public class Employee
{
/* this will map the _id attribute from the database to ThisIsId property */
[ArangoProperty(Identity = true)]
public string ThisIsId { get; set; }
/* this will map the _key attribute from the database to the Id property */
[ArangoProperty(Key = true)]
public string Id { get; set; }
/* here is _rev */
[ArangoProperty(Revision = true)]
public string ThisIsRevision { get; set; }
public DateTime DateOfBirth { get; set; }
public string Name { get; set; }
public int Salary { get; set; }
public Employee()
{
}
}
The ThisIsId property will contain the automatically assigned _id value, and can also be used to retrieve the document easily later:
var employeeFromDatabase = db.Document.Get<Employee>(employee.ThisIsId);
You can of course rename the properties to your liking.
Here are my entities:
public abstract class ResourceBase
{
[Key]
int Id { get; set; }
[ForeignKey("Resource")]
public Guid ResourceId { get; set; }
public virtual Resource Resource { get; set; }
}
public class Resource
{
[Key]
public Guid Id { get; set; }
public string Type { get; set; }
}
public class Message : ResourceBase
{
[MaxLength(300)]
public string Text { get; set; }
}
And then my query is something like this:
var msgs = messages.Where(x => someRangeOfIds.Contains(x.Id))
    .Include(m => m.Resource)
    .Select(x => new
    {
        message = x,
        replyCount = messages.Count(msg => msg.Id == magicNumber)
    });
I am running this with proxy creation disabled, and the result is all the messages BUT with all the Resource properties as NULL. I checked the database and the Resources with matching Guids are there.
I drastically simplified my real life scenario for illustration purposes, but I think you'll find you can reproduce the issue with just this.
Entity Framework 5 handles inherited properties well (by flattening the inheritance tree and including all the properties as columns in the entity table).
The reason this query didn't work is the projection after the Include. Unfortunately, Include only really works when you are returning entities. I did see mention of a tricky workaround that involves invoking Include after the shape of the return data is specified; if anyone has more information on this, please reply.
The solution I came up with was to rephrase this as two queries: one trip to the database that gets all the messages, and a second that gets all the reply counts.
Two round trips when it really should only be one.
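For what it's worth, a rough sketch of a single-query shape that avoids Include by projecting the navigation explicitly (messages, someRangeOfIds and magicNumber are the same placeholders as in the question):
var result = messages
    .Where(x => someRangeOfIds.Contains(x.Id))
    .Select(x => new
    {
        message = x,
        resource = x.Resource, // projected explicitly, so no Include is needed
        replyCount = messages.Count(msg => msg.Id == magicNumber)
    })
    .ToList();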
If I'm trying to serialize a normal CLR object, and I do not want a particular member variable to be serialized, I can tag it with the
[NonSerialized]
attribute. If I am creating a table services entity, is there an equivalent attribute I can use to tell Azure table services to ignore this property?
For Version 2.1 there is a new Microsoft.WindowsAzure.Storage.Table.IgnoreProperty attribute. See the 2.1 release notes for more information: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/09/07/announcing-storage-client-library-2-1-rtm.aspx.
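A minimal usage sketch, assuming the 2.1 library (the entity and property names here are made up):
using Microsoft.WindowsAzure.Storage.Table;

public class MyEntity : TableEntity
{
    public string Stored { get; set; }

    [IgnoreProperty] // skipped by the table serializer in 2.1+
    public string NotStored { get; set; }
}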
There's no equivalent I know of.
This post says how you can achieve the desired effect - http://blogs.msdn.com/b/phaniraj/archive/2008/12/11/customizing-serialization-of-entities-in-the-ado-net-data-services-client-library.aspx
Alternatively, if you can get away with using "internal" rather than "public" on your property then it will not get persisted with the current SDK (but this might change in the future).
For version 2.0 of the Table Storage SDK there is a new way to achieve this.
You can now override the WriteEntity method on TableEntity and remove any entity properties that are marked with an attribute. All my entities derive from a class that does this:
public class CustomSerializationTableEntity : TableEntity
{
public CustomSerializationTableEntity()
{
}
public CustomSerializationTableEntity(string partitionKey, string rowKey)
: base(partitionKey, rowKey)
{
}
public override IDictionary<string, EntityProperty> WriteEntity(Microsoft.WindowsAzure.Storage.OperationContext operationContext)
{
var entityProperties = base.WriteEntity(operationContext);
var objectProperties = this.GetType().GetProperties();
foreach (PropertyInfo property in objectProperties)
{
// if the property is marked [NotSerialized], remove it from the properties written to table storage
object[] notSerializedAttributes = property.GetCustomAttributes(typeof(NotSerializedAttribute), false);
if (notSerializedAttributes.Length > 0)
{
entityProperties.Remove(property.Name);
}
}
return entityProperties;
}
}
[AttributeUsage(AttributeTargets.Property)]
public class NotSerializedAttribute : Attribute
{
}
Then you can make use of this class for your entities like
public class MyEntity : CustomSerializationTableEntity
{
public MyEntity()
{
}
public string MySerializedProperty { get; set; }
[NotSerialized]
public List<string> MyNotSerializedProperty { get; set; }
}
Hey guys,
I'm trying to create a TPH mapping on a hierarchy where the discriminating clause is the classical "IS NOT NULL" / "IS NULL" case.
Here is the example, database-wise:
CREATE TABLE info.EducationTypes
(
ID INT NOT NULL PRIMARY KEY,
Name NVARCHAR(64) NOT NULL,
FKParentID INT NULL REFERENCES info.EducationTypes(ID)
)
The idea is to have a class hierarchy like the following:
public abstract class EducationType
{
public int ID { get; set; }
public string Name { get; set; }
}
public class MainEducationType : EducationType
{
public IEnumerable<SubEducationType> SubTypes { get; set; }
}
public class SubEducationType : EducationType
{
public MainEducationType MainType { get; set; }
}
I got this schema "working" with the classic XML model, but I really can't find a way to get it working using the code-first approach. This is what I tried...
var educationType = modelBuilder.Entity<EducationType>();
educationType.Map<MainEducationType>(m => m.Requires("FKParentID").HasValue(null));
educationType.Map<SubEducationType>(m => m.Requires("FKParentID"));
Do you have any suggestions?
Unfortunately, having a null value for the discriminator column in a TPH mapping is not currently supported in CTP5. This is confirmed by the EF team here and also here. They are looking at whether they can make it work for the RTM, though.
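If you can live with a real discriminator value instead of the NULL check, a rough sketch of a mapping CTP5 does support could look like this (the EducationKind column is made up and would need to be populated):
modelBuilder.Entity<EducationType>()
    .Map<MainEducationType>(m => m.Requires("EducationKind").HasValue("Main"))
    .Map<SubEducationType>(m => m.Requires("EducationKind").HasValue("Sub"));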