We're switching over from Lokad to CloudFx to handle putting things in/out of table storage.
With Lokad, we had 3 entities:
1) Object 1, which added partition key and row key to
2) Object 2 which had a bunch of properties as well as an instance of
3) Object 3, which had its own set of properties
As far as I can tell, the only way for CloudFx to put that info into table storage is to flatten the whole thing out into one massive object that has all the properties of the previous three objects. With Lokad, we could just use Skinny=true.
Thoughts?
This is the method we ended up implementing. Short version: serialize Object 2, stuff it into the Value property of Object 1, then put that in table storage.
Object 1:
public class CloudEntity<T>
{
    public CloudEntity()
    {
        Timestamp = DateTime.UtcNow;
    }

    public string RowKey { get; set; }
    public string PartitionKey { get; set; }
    public DateTime Timestamp { get; set; }
    public T Value { get; set; }
}
Object 2:
public class Store
{
    public string StoreId { get; set; }
    public string StoreName { get; set; }
    public string StoreType { get; set; }
    public Address MailingAddress { get; set; }
    public PhoneNumber TelephoneNumber { get; set; }
    public StoreHours StoreHours { get; set; }
}
Object 3 can be whatever (the Address in this case, perhaps); it all gets serialized.
So in code you can get the table as follows (there's more than one way to do this):
var tableStorage = new ReliableCloudTableStorage(connectionString); // the connection string you're using
Then let's say you have an instance of Store you want to put in table storage:
var myStore = new Store
{
    StoreId = "9832",
    StoreName = "Headquarters",
    ...
};
You can do so in the following way:
var cloudEntity = new CloudEntity<string>
{
    PartitionKey = "...", // whatever you want your partition key to be
    RowKey = "...",       // whatever you want your row key to be
    Value = JsonConvert.SerializeObject(myStore) // THIS IS THE MAGIC
};

tableStorage.Add<CloudEntity<string>>("MyTable", cloudEntity); // first argument is the name of your table in table storage
The entity put in table storage will have all the properties of the CloudEntity class (row key, partition key, etc.), and the "Value" column will contain the JSON of the object you wanted to store. It's easily readable through Azure Storage Explorer, which is nice too.
To get the object back out, use something like this:
var cloudEntity = tableStorage.Get<CloudEntity<string>>("MyTable", partitionKey: "..."); // your table name and partition key
Then you can deserialize the "Value" field of those into the object you're expecting:
var myStore = JsonConvert.DeserializeObject<Store>(cloudEntity.Value);
If you're getting a bunch back, just make myStore a list and loop through the cloud entities, deserializing and adding each to your list, as sketched below.
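For example, a minimal sketch, assuming the Get overload above returns every matching entity in the partition (table name and partition key are placeholders):
var cloudEntities = tableStorage.Get<CloudEntity<string>>("MyTable", partitionKey: "...");
var myStores = new List<Store>();
foreach (var entity in cloudEntities)
{
    // Each Value column holds the JSON for one Store
    myStores.Add(JsonConvert.DeserializeObject<Store>(entity.Value));
}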
Hope this helps!
My model SecPermission has an Id column of type int (Int32). When I add a new record, why is the newly added ID returned as Int64?
Service method
public object Post(AddPermission request)
{
    var perm = request.ConvertTo<SecPermission>();
    perm.AuditUserId = UserAuth.Id;
    LogInfo(typeof(SecPermission), request, LogAction.Insert);
    return Db.Insert(perm);
}
Unit Test code
using (var service = HostContext.ResolveService<SecurityService>(authenticatedRequest))
{
    // This line is returning an object with an Int64 in it.
    int id = (int)service.Post(new AddPermission { Name = name, Description = "TestDesc" });
    service.Put(new UpdatePermission { Id = permission, Name = name, Description = "TestDesc" });
    service.Delete(new DeletePermission { Id = Convert.ToInt32(id) });
}
public class SecPermission : IAudit
{
    [AutoIncrement]
    [PrimaryKey]
    public int Id { get; set; }

    [Required]
    [StringLength(50)]
    public string Name { get; set; }

    [Required]
    [StringLength(75)]
    public string Description { get; set; }

    [Required]
    public PermissionType PermissionType { get; set; }

    public int AuditUserId { get; set; }
    public DateTime AuditDate { get; set; } = DateTime.Now;
}
You should never return a Value Type from a ServiceStack Service; it needs to be a reference type, typically a Typed Response DTO, though it can also be a raw data type like string or byte[]. It should never be a Value Type like an integer, which will fail to work with some ServiceStack features.
For this Service I'd either return the SecPermission object or an AddPermissionResponse object with the integer in the result value.
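For instance, a minimal sketch of such a Response DTO (AddPermissionResponse is illustrative here, not an existing type in the question's code):
public class AddPermissionResponse
{
    public int Id { get; set; }

    // Lets ServiceStack populate structured error details on failure
    public ResponseStatus ResponseStatus { get; set; }
}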
Please note that OrmLite's Insert() API returns a long, which is why you're seeing one. However, you need to either call Save() or specify selectIdentity:true in order to fetch the newly inserted id of an [AutoIncrement] primary key, e.g.:
var newId = db.Insert(perm, selectIdentity:true);
or
Db.Save(perm);
var newId = perm.Id; //auto populated with auto incremented primary key
Also, you don't need both [PrimaryKey] and [AutoIncrement] in OrmLite, as [AutoIncrement] specifies a primary key on its own, as does using the Id property convention.
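So the key on the model above could be trimmed to just (a sketch, remaining properties unchanged):
public class SecPermission : IAudit
{
    [AutoIncrement] // implies the primary key; no [PrimaryKey] needed
    public int Id { get; set; }
    // ...
}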
Also, if you're going to call the Service directly, you may as well type the Response to avoid casting, e.g.:
public SecPermission Post(AddPermission request)
{
    //...
    Db.Save(perm);
    return perm;
}
Then you don't need to cast when calling it directly, e.g:
var id = service.Post(new AddPermission { ... }).Id;
There's no behavioral difference in ServiceStack between using object or a typed response like SecPermission, although it's preferable to specify it on your Request DTO using the IReturn<T> interface marker, e.g.:
public class AddPermission : IReturn<SecPermission> { ... }
As it enables end-to-end Typed APIs when called from Service Clients, e.g:
SecPermission response = client.Post(new AddPermission { ... });
Currently I have a part that has 3 fields (Name, Value1, Value2). I have everything working where I can do a Create/Edit/Delete on the part.
What I want to do now is have a grid with 3 columns (Name, Value1, Value2) that can have multiple rows (how many is up to the user). The save won't happen until the user is done (save all rows in a single postback).
I haven't figured out what is needed for a collection of items to get saved on postback.
Any suggestions on how to do this?
Thanks!
What you could do is have, in the part, a collection of records corresponding to (Name, Value1, Value2), by having your DBMS create and manage a 1-to-n relationship.
For example, you would have
public class ThisIsYourPart : ContentPart<ThisIsYourPartRecord> {
    // You can access the list of your records as
    // yourPart.Record.YourRecords
}

public class ThisIsYourPartRecord : ContentPartRecord {
    public ThisIsYourPartRecord() {
        YourRecords = new List<YourRecordWithValues>();
    }

    public virtual IList<YourRecordWithValues> YourRecords { get; set; }
}

public class YourRecordWithValues {
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual string Value1 { get; set; } // use your actual type
    public virtual ThisIsYourPartRecord ThisIsYourPartRecord { get; set; }
}
public class YourMigration : DataMigrationImpl {
    public int Create() {
        SchemaBuilder.CreateTable("YourRecordWithValues", table => table
            .Column<int>("Id", col => col.Identity().PrimaryKey())
            .Column<string>("Name", col => col.NotNull().Unlimited())
            .Column<string>("Value1", col => col.NotNull().Unlimited())
            .Column<int>("ThisIsYourPartRecord_Id"));

        SchemaBuilder.CreateTable("ThisIsYourPartRecord", table => table
            .ContentPartRecord());

        return 1;
    }
}
Code like that should do it.
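You'll also need a handler so Orchard knows how to persist the part's record; a minimal sketch, following the usual Orchard 1.x convention:
public class ThisIsYourPartHandler : ContentHandler {
    public ThisIsYourPartHandler(IRepository<ThisIsYourPartRecord> repository) {
        // Wires the part record to its repository for storage
        Filters.Add(StorageFilter.For(repository));
    }
}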
We used this kind of relationship a lot in https://github.com/bleroy/Nwazet.Commerce
Edit: of course, put all the code in the proper files and folders.
I am trying to retrieve some entities from a table. I can successfully get back strings, but when I try to get a GUID, it comes back empty (all zeros).
[DataContract]
public class myEntity : TableEntity
{
    [DataMember(Name = "ID")]
    public Guid Id { get; set; }

    [DataMember(Name = "Name")]
    public string Name { get; set; }

    [DataMember(Name = "City")]
    public string City { get; set; }
}
...
var storageAccount = CloudStorageAccount.Parse(conStr);
var tableClient = storageAccount.CreateCloudTableClient();
var table = tableClient.GetTableReference(tblName);
TableQuery<myEntity> query = new TableQuery<myEntity>().Where(string.Empty);
How do I get the correct value of GUIDs in table storage? Is it related to TableEntity.ReadEntity?
I don't think any extra operation is required to get GUID properties. Have you cross-checked with other tools whether there are indeed values stored in the GUID properties of that table?
@RotemVaron, you can use the TableEntity.ReadEntity(IDictionary<string, EntityProperty>, OperationContext) method, which deserializes the entity using the specified dictionary mapping property names to typed EntityProperty values, and then get the Guid value from the EntityProperty object.
There is a blog post that shows a sample, Reading existing entities, which traverses an entity's properties, including GuidValue.
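As a rough sketch of that idea, assuming the entity and the "ID" column from the question, you could override ReadEntity and pull the Guid out yourself:
public class myEntity : TableEntity
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }

    public override void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
    {
        // Populate the string properties the default way
        base.ReadEntity(properties, operationContext);

        // Map the "ID" column's GuidValue onto the Id property manually
        EntityProperty idProperty;
        if (properties.TryGetValue("ID", out idProperty) && idProperty.GuidValue.HasValue)
            Id = idProperty.GuidValue.Value;
    }
}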
We have a DTO - Employee - with many (> 20) related DTOs and DTO collections. For "size of returned JSON" reasons, we have marked those relationships as [Ignore]. It is then up to the client to populate any related DTOs that they would like using other REST calls.
We have tried a couple of things to satisfy clients' desire to have some related Employee info but not all:
We created a new DTO - EmployeeLite - which has the most-requested fields defined using the "RelatedTableNameRelatedFieldName" approach, and used the QueryBase overload; that has worked well.
We've also tried adding a property to a request DTO - "References" - a comma-separated list of related DTOs that the client would like populated. We then iterate the response and populate each Employee with the related DTO or List. The concern there is performance when iterating a large List.
We're wondering if there is a suggested approach to what we're trying to do?
Thanks for any suggestions you may have.
UPDATE:
Here is a portion of our request DTO:
[Route("/employees", "GET")]
public class FindEmployeesRequest : QueryDb<Employee> {
public int? ID { get; set; }
public int[] IDs { get; set; }
public string UserID { get; set; }
public string LastNameStartsWith { get; set; }
public DateTime[] DateOfBirthBetween { get; set; }
public DateTime[] HireDateBetween { get; set; }
public bool? IsActive { get; set; }
}
There is no code for the service (automagical with QueryDb), so I added some to try the "merge" approach:
public object Get(FindEmployeesRequest request) {
    var query = AutoQuery.CreateQuery(request, Request.GetRequestParams());
    QueryResponse<Employee> response = AutoQuery.Execute(request, query);

    if (response.Total > 0) {
        List<Clerkship> clerkships = Db.Select<Clerkship>();
        response.Results.Merge(clerkships);
    }

    return response;
}
This fails with Could not find Child Reference for 'Clerkship' on Parent 'Employee'
because in Employee we have:
[Ignore]
public List<Clerkship> Clerkships { get; set; }
which we did because we don't want Clerkships with every request. If I change [Ignore] to [Reference], I don't need the code above in the service; the List comes back automatically. So it seems that .Merge only works with [Reference], which we don't want to use.
I'm not sure how I would use the "Custom Load References" approach in an AutoQuery service. And, AFAICT, the "Custom Fields" approach can't be used for related DTOs, only for fields in the base table.
UPDATE 2:
The LoadSelect with include[] is working well for us. We are now trying to cover the case where ?fields= is used in the query string but the client does not request the ID field of the related DTO:
public partial class Employee {
    [PrimaryKey]
    [AutoIncrement]
    public int ID { get; set; }

    // ...

    [References(typeof(Department))]
    public int DepartmentID { get; set; }

    // ...
}

public class Department {
    [PrimaryKey]
    public int ID { get; set; }
    public string Name { get; set; }

    // ...
}
So, for the request
/employees?fields=id,departmentid
we will get the Department in the response. But for the request
/employees?fields=id
we won't get the Department in the response.
We're trying to "quietly fix" this for the requester by modifying query.SelectExpression, adding , "Employee"."DepartmentID" to the SELECT before doing the Db.LoadSelect. Debugging shows that query.SelectExpression is being modified, but according to SQL Profiler, "Employee"."DepartmentID" is not being selected.
Is there something else we should be doing to get "Employee"."DepartmentID" added to the SELECT?
Thanks.
UPDATE 3:
The Employee table has three 1:1 relationships - EmployeeType, Department and Title:
public partial class Employee {
    [PrimaryKey]
    [AutoIncrement]
    public int ID { get; set; }

    [References(typeof(EmployeeType))]
    public int EmployeeTypeID { get; set; }

    [References(typeof(Department))]
    public int DepartmentID { get; set; }

    [References(typeof(Title))]
    public int TitleID { get; set; }

    // ...
}
public class EmployeeType {
    [PrimaryKey]
    public int ID { get; set; }
    public string Name { get; set; }
}

public class Department {
    [PrimaryKey]
    public int ID { get; set; }
    public string Name { get; set; }

    [Reference]
    public List<Title> Titles { get; set; }
}

public class Title {
    [PrimaryKey]
    public int ID { get; set; }

    [References(typeof(Department))]
    public int DepartmentID { get; set; }

    public string Name { get; set; }
}
The latest update to 4.0.55 allows this:
/employees?fields=employeetype,department,title
I get back all the Employee table fields plus the three related DTOs, with one strange thing: the Employee's ID field is populated with the Employee's TitleID values (I think we saw this before?).
This request fixes that anomaly:
/employees?fields=id,employeetypeid,employeetype,departmentid,department,titleid,title
but I lose all of the other Employee fields.
This sounds like a "have your cake and eat it too" request, but is there a way that I can get all of the Employee fields and selective related DTOs? Something like:
/employees?fields=*,employeetype,department,title
AutoQuery Customizable Fields
Not sure if this is relevant, but AutoQuery has built-in support for customizing which fields to return with the ?fields=Field1,Field2 option.
Merge disconnected POCO Results
As you've not provided any source code, it's not clear what you're trying to achieve or where the inefficiency in the existing solution lies, but you don't want to be doing any N+1 SELECT queries. If you are, have a look at how you can merge disconnected POCO results together, which lets you merge results from separate queries based on the relationships defined using OrmLite references. E.g. the example below uses 2 distinct queries to join Customers with their Orders:
// Select Customers who've had orders with Quantities of 10 or more
List<Customer> customers = db.Select<Customer>(q =>
    q.Join<Order>()
     .Where<Order>(o => o.Qty >= 10)
     .SelectDistinct());

// Select Orders with Quantities of 10 or more
List<Order> orders = db.Select<Order>(o => o.Qty >= 10);

customers.Merge(orders); // Merge disconnected Orders with their related Customers
Custom Load References
You can selectively control which references OrmLite should load by specifying them when you call OrmLite's Load* APIs, e.g.:
var customerWithAddress = db.LoadSingleById<Customer>(customer.Id,
    include: new[] { "PrimaryAddress" });
Using Custom Load References in AutoQuery
You can customize an AutoQuery Request to not return any references by using Db.Select instead of Db.LoadSelect in your custom AutoQuery implementation, e.g:
public object Get(FindEmployeesRequest request)
{
    var q = AutoQuery.CreateQuery(request, Request);
    var response = new QueryResponse<Employee>
    {
        Offset = q.Offset.GetValueOrDefault(0),
        Results = Db.Select(q),
        Total = (int)Db.Count(q),
    };
    return response;
}
Likewise, if you only want to selectively load 1 or more references, you can change LoadSelect to pass in an include: array with only the reference fields you want included, e.g.:
public object Get(FindEmployeesRequest request)
{
    var q = AutoQuery.CreateQuery(request, Request);
    var response = new QueryResponse<Employee>
    {
        Offset = q.Offset.GetValueOrDefault(0),
        Results = Db.LoadSelect(q, include: new[] { "Clerkships" }),
        Total = (int)Db.Count(q),
    };
    return response;
}
I am facing an issue while inserting an entity, and I am not sure what is wrong.
On insert, I get a StorageClientException stating "value out of range".
My Table Service Entity looks like
public class Itinerary : TableServiceEntity
{
    public string Name { get; set; }
    public DateTime DOB { get; set; }
    public int Sex { get; set; }
    public string ToPNR { get; set; }
    public string ReturnPNR { get; set; }
    public string ContactNumber { get; set; }
    public DateTime TravelDate { get; set; }
    public DateTime ReturnDate { get; set; }
}
The entity gets inserted when complete itinerary details are provided, but for an itinerary with only one-way details, the insert fails with the given exception.
Any help would be appreciated.
The problem, I guess, lies with your DateTime fields. If you haven't initialized them before storing your data in Table Storage, those fields will hold .NET's DateTime.MinValue. Unfortunately, that value is out of bounds for Azure Table Storage, so it is always advisable to give every DateTime field you store a value. If you want to assign a default, use the CloudTableClient.MinSupportedDateTime property, which initializes the field with the minimum value supported by Azure Storage:
http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.storageclient.cloudtableclient.minsupporteddatetime.aspx
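For example, a minimal sketch of defaulting the dates in the entity above (so a one-way itinerary never inserts DateTime.MinValue):
public class Itinerary : TableServiceEntity
{
    public Itinerary()
    {
        // Default all dates to the minimum value Table Storage accepts
        DOB = CloudTableClient.MinSupportedDateTime;
        TravelDate = CloudTableClient.MinSupportedDateTime;
        ReturnDate = CloudTableClient.MinSupportedDateTime;
    }

    // ... properties as above
}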
Hope this helps