I want to know if there is a way to trigger my validation code just for ImpEx. What I mean is that my code should validate a new Product created through ImpEx (not through the Backoffice). Here is my code:
@Override
public void onValidate(final Object o, final InterceptorContext ctx) throws InterceptorException
{
    if (o instanceof ProductModel)
    {
        final ProductModel product = (ProductModel) o;
        if (ctx.isNew(product))
        {
            // if getManufacturerName() returns null, enumerationService throws "Parameter plainEnum must not be null!"
            final String manufacturerName = enumerationService.getEnumerationName(product.getManufacturerName());
            final String code = product.getCode();
            final String manufacturerProductId = product.getManufacturerProductId();
            final int result = productLookupService.getProduct(code, manufacturerName, manufacturerProductId);
            switch (result)
            {
                case 1:
                    throw new InterceptorException("Product already in ProductLookup");
                case 2:
                    throw new InterceptorException(
                            "There is more than one product with the same " + code + " number in ProductLookup!");
                default:
                    break;
            }
        }
    }
}
My problem is that in the Backoffice, when I create a new Product, I don't have the manufacturerName and manufacturerProductId fields.
Thanks! And sorry if I said something wrong, I am new to this :)
You said that "in the Backoffice, when I create a new Product, I don't have the manufacturerName and manufacturerProductId fields".
This can also be the case for ImpEx. Most probably the ImpEx script you are using specifies those two attributes, and that is why there is no problem right now.
If you want, you can make these two attributes mandatory; then nobody will be able to create a product without specifying the manufacturerName and the manufacturerProductId.
I also believe that the Backoffice will then include these two attributes during creation, since they are mandatory.
You can specify that an attribute is mandatory in your {extensionName}-items.xml (under your type definition) using the optional flag, like below:
<attribute qualifier="manufacturerProductId" type="String">
    <persistence type="property"/>
    <modifiers optional="false"/>
</attribute>
If these two attributes are not mandatory, you have to consider the case when a product is created without them (like your current Backoffice situation).
Your interceptor should take both cases into consideration: when these attributes are specified during creation and when they are not.
To handle that, you can edit your code to verify whether these two attributes are null before using them. You can add this right before trying to get the manufacturerName:
if (product.getManufacturerName() == null || product.getManufacturerProductId() == null) {
return;
}
I need information on how to use the PXSubordinateSelector attribute with the Where type that you can allegedly set on the attribute. Does anybody know how to use it?
Specifically, I need to filter the selector by a custom field in the EPCompanyTree table, if possible. I'm not sure which tables this selector attribute uses; it seems to be tucked into the Acumatica black box. Something like this:
[PXSubordinateSelector(typeof(Where<EPCompanyTree.customField, Equal<{somevalue}>>))]
I've tried setting the Where to an arbitrary filter on the EPCompanyTree.sortOrder field, but I'm getting an "is not bound" error when clicking on the lookup.
TIA!
The reason for this error is the defined Search in the GetCommand method of the PXSubordinateSelectorAttribute. Below is the disassembled code of that method:
private static Type GetCommand(Type where)
{
    Type whereType = typeof(Where<CREmployee.userID, Equal<Current<AccessInfo.userID>>,
        Or<EPCompanyTreeMember.workGroupID, Owned<Current<AccessInfo.userID>>>>);
    if (where != null)
    {
        whereType = BqlCommand.Compose(new Type[]
        {
            typeof(Where2<,>),
            typeof(Where<CREmployee.userID, Equal<Current<AccessInfo.userID>>,
                Or<EPCompanyTreeMember.workGroupID, Owned<Current<AccessInfo.userID>>>>),
            typeof(And<>),
            where
        });
    }
    return BqlCommand.Compose(new Type[]
    {
        typeof(Search5<,,,>),
        typeof(CREmployee.bAccountID),
        typeof(LeftJoin<EPCompanyTreeMember, On<EPCompanyTreeMember.userID, Equal<CREmployee.userID>>>),
        whereType,
        typeof(Aggregate<GroupBy<CREmployee.acctCD>>)
    });
}
As you can see from the code, the Search is organized on CREmployee with a left-joined EPCompanyTreeMember; meanwhile, your code is trying to add a condition on an EPCompanyTree field, which does not participate in the Search.
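Since CREmployee and EPCompanyTreeMember do participate, a condition restricted to their fields should bind correctly. As a hedged sketch (the filter itself is illustrative, not a recommendation; both fields appear in the disassembled Search above):
[PXSubordinateSelector(typeof(Where<EPCompanyTreeMember.workGroupID, IsNotNull>))]
If the value you need really lives in EPCompanyTree, you would likely have to write your own selector attribute that composes a Search joining that table, along the lines of the disassembled code above.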
I'm trying to weld my custom ContentPart SitesPart, which contains a ContentField of type TaxonomyField, but it is not working for me. When I attach this part from the UI it works perfectly fine and I see the TaxonomyField in edit mode as well as in display mode.
Following is the Activating method of my ContentHandler.
protected override void Activating(ActivatingContentContext context)
{
if (context.ContentType == "Page")
{
context.Builder.Weld<SitesPart>();
}
}
I tried to dig into the Weld function and found out that it is not able to find the correct typePartDefinition. It goes inside the condition if (typePartDefinition == null), which creates an empty typePartDefinition with no existing ContentFields etc.
// obtain the type definition for the part
var typePartDefinition = _definition.Parts.FirstOrDefault(p => p.PartDefinition.Name == partName);
if (typePartDefinition == null) {
    // If the content item's type definition does not define the part, use an empty type definition.
    typePartDefinition =
        new ContentTypePartDefinition(
            new ContentPartDefinition(partName),
            new SettingsDictionary());
}
I would be highly thankful for any guidance.
Oh, you are totally right: the part is welded, but if there are some content fields, they are not. The ContentItemBuilder tries to retrieve the part definition through the content type definition on which we want to add the part. So, because that's not possible here, a new content part definition is created, but with an empty collection of ContentPartFieldDefinitions...
I think that the ContentItemBuilder would need to have a ContentPartDefinition, or more generally an IContentDefinitionManager, injected in its constructor and use that... But as a quick workaround I've tried the following, which works.
In ContentItemBuilder.cs, replace this
public ContentItemBuilder Weld<TPart>()...
With
public ContentItemBuilder Weld<TPart>(ContentPartDefinition contentPartDefinition = null)...
And this
new ContentPartDefinition(partName),
With
contentPartDefinition ?? new ContentPartDefinition(partName),
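Putting the two replacements together, the relevant part of the patched Weld method would read roughly like this (a sketch; the elided parts of the original method stay unchanged):
public ContentItemBuilder Weld<TPart>(ContentPartDefinition contentPartDefinition = null) ... {
    // obtain the type definition for the part
    var typePartDefinition = _definition.Parts.FirstOrDefault(p => p.PartDefinition.Name == partName);
    if (typePartDefinition == null) {
        // prefer the caller-supplied definition (which carries the
        // ContentPartFieldDefinitions) over an empty fallback
        typePartDefinition =
            new ContentTypePartDefinition(
                contentPartDefinition ?? new ContentPartDefinition(partName),
                new SettingsDictionary());
    }
    // ...
}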
And in your part handler, inject an IContentDefinitionManager and use this:
protected override void Activating(ActivatingContentContext context) {
if (context.ContentType == "TypeTest") {
var contentPartDefinition = _contentDefinitionManager.GetPartDefinition(typeof(FruitPart).Name);
context.Builder.Weld<FruitPart>(contentPartDefinition);
}
}
Best
To attach a content part to a content type on the fly, you can use this in your handler
Filters.Add(new ActivatingFilter<YourContentPart>("YourContentType"));
There are many examples in the source code
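For instance, a minimal handler along these lines (the part and type names are placeholders):
public class YourContentPartHandler : ContentHandler {
    public YourContentPartHandler() {
        // welds YourContentPart onto every "YourContentType" item when it is activated
        Filters.Add(new ActivatingFilter<YourContentPart>("YourContentType"));
    }
}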
Best
I have a business requirement to only send permissioned properties in our response payload. For instance, our response DTO may have several properties, and one of them is SSN. If the user doesn't have permission to view the SSN, then I never want it to be in the JSON response. The second requirement is that we send null values if the client has permission to view or change the property. Because of the second requirement, setting the properties that the user cannot view to null will not work; I still have to return genuine null values.
I have a solution that will work: I create an ExpandoObject by reflecting over my DTO and add only the properties that I need. This is working in my tests.
I have looked at implementing ITextSerializer. I could use that and wrap my response DTO in another object that would carry a list of properties to skip. Then I could roll my own SerializeToString() and SerializeToStream(). I don't really see any other way at this point. I can't use JsConfig with a SerializeFn, because the properties to skip would change with each request.
So I think that implementing ITextSerializer is a good option. Are there any good examples of it being implemented? I would really like to reuse all the hard work that was already done in the serializer and take advantage of its great performance. In an ideal world I would just need to add a check in WriteType.WriteProperties() to see whether the property is one to write, but that method is internal (and really, most of them are), so I can't take advantage of them.
If someone has some insight, please let me know! Maybe I am making the implementation of ITextSerializer a lot harder than it really is?
Thanks!
Pull request #359 added the property "ExcludePropertyReference" to JsConfig and JsConfigScope. You can now exclude properties in scope, like I needed to.
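Usage might look roughly like this (a hedged sketch: the exact member name and the "TypeName.PropertyName" string format should be verified against the pull request):
// assumption: the scope member takes "TypeName.PropertyName" strings
using (var scope = JsConfig.BeginScope())
{
    scope.ExcludePropertyReference = new[] { "Test.SSN" };
    // within this scope SSN is skipped; other properties (and nulls) serialize as configured
    var json = new Test { Name = "Joe", SSN = "123-45-6789" }.ToJson();
}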
I would be hesitant to write my own serializer. I would try to find solutions that plug into the existing ServiceStack code; that way you will have to worry less about updating DLLs and breaking changes.
One potential solution would be decorating your properties with a custom attribute that you could reflect upon and obscure the property values with. This could be done in the service, before serialization even happens. This would still include properties the user does not have permission to see, but I would argue that if you nulled those properties out they wouldn't even be serialized to JSON anyway. If you keep all the properties the same, you keep the benefits of strongly typed DTOs.
Here is some hacky code I quickly came up with to demonstrate this. I would move it into a plugin and make the reflection faster with some sort of property caching, but I think you will get the idea.
Hit the URL twice using the following routes to see it in action:
/test?role
/test?role=Admin (hack to pretend to be an authenticated request)
[System.AttributeUsage(System.AttributeTargets.Property)]
public class SecureProperty : System.Attribute
{
public string Role {get;set;}
public SecureProperty(string role)
{
Role = role;
}
}
[Route("/test")]
public class Test : IReturn
{
public string Name { get; set; }
[SecureProperty("Admin")]
public string SSN { get; set; }
public string SSN2 { get; set; }
public string Role {get;set;}
}
public class TestService : Service
{
public object Get(Test request)
{
// hack to demo roles.
var usersCurrentRole = request.Role;
var props = typeof(Test).GetProperties()
.Where(
prop => ((SecureProperty[])prop
.GetCustomAttributes(typeof(SecureProperty), false))
.Any(att => att.Role != usersCurrentRole)
);
var t = new Test() {
Name = "Joe",
SSN = "123-45-6789",
SSN2 = "123-45-6789" };
foreach(var p in props) {
p.SetValue(t, "xxx-xx-xxxx", null);
}
return t;
}
}
Require().StartHost("http://localhost:8080/",
configurationBuilder: host => { });
I created this demo in ScriptCS. Check it out.
I'm working with Windows Azure Table Storage and have a simple requirement: add a new row, overwriting any existing row with that PartitionKey/RowKey. However, saving the changes always throws an exception, even if I pass in the ReplaceOnUpdate option:
tableServiceContext.AddObject(TableName, entity);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
If the entity already exists it throws:
System.Data.Services.Client.DataServiceRequestException: An error occurred while processing this request. ---> System.Data.Services.Client.DataServiceClientException: <?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<code>EntityAlreadyExists</code>
<message xml:lang="en-AU">The specified entity already exists.</message>
</error>
Do I really have to manually query for the existing row first and call DeleteObject on it? That seems very slow. Surely there is a better way?
As you've found, you can't just add another item that has the same row key and partition key, so you will need to run a query to check whether the item already exists. In situations like this I find it helpful to look at the Azure REST API documentation to see what is available to the storage client library. You'll see that there are separate methods for inserting and updating; ReplaceOnUpdate only has an effect when you're updating, not inserting.
While you could delete the existing item and then add the new one, you could just update the existing one (saving you one round trip to storage). Your code might look something like this:
var existsQuery = from e in tableServiceContext.CreateQuery<MyEntity>(TableName)
                  where e.PartitionKey == objectToUpsert.PartitionKey
                     && e.RowKey == objectToUpsert.RowKey
                  select e;
MyEntity existingObject = existsQuery.FirstOrDefault();
if (existingObject == null)
{
tableServiceContext.AddObject(TableName, objectToUpsert);
}
else
{
existingObject.Property1 = objectToUpsert.Property1;
existingObject.Property2 = objectToUpsert.Property2;
tableServiceContext.UpdateObject(existingObject);
}
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
EDIT: While correct at the time of writing, with the September 2011 update Microsoft updated the Azure Table API to include two upsert commands, Insert Or Replace Entity and Insert Or Merge Entity.
In order to operate on an existing object NOT managed by the TableContext, with either DeleteObject or SaveChanges with the ReplaceOnUpdate option, you need to call AttachTo to attach the object to the TableContext, instead of calling AddObject, which instructs the TableContext to attempt to insert it.
http://msdn.microsoft.com/en-us/library/system.data.services.client.dataservicecontext.attachto.aspx
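A minimal sketch of that approach (assuming entity already carries the right PartitionKey/RowKey; the "*" ETag makes the replace unconditional):
// attach the unmanaged entity with a wildcard ETag, then mark it updated
tableServiceContext.AttachTo(TableName, entity, "*");
tableServiceContext.UpdateObject(entity);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
Note that this only replaces an existing row; if the entity does not exist yet, the update will fail with a not-found error.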
In my case it was not allowed to remove it first, so I do it like this instead. This results in one call to the server which first removes the existing object and then adds the new one, removing the need to copy property values:
var existing = from e in _ServiceContext.AgentTable
               where e.PartitionKey == item.PartitionKey
                  && e.RowKey == item.RowKey
               select e;
_ServiceContext.IgnoreResourceNotFoundException = true;
var existingObject = existing.FirstOrDefault();
if (existingObject != null)
{
_ServiceContext.DeleteObject(existingObject);
}
_ServiceContext.AddObject(AgentConfigTableServiceContext.AgetnConfigTableName, item);
_ServiceContext.SaveChangesWithRetries();
_ServiceContext.IgnoreResourceNotFoundException = false;
Insert Or Replace and Insert Or Merge were added to the API in September 2011. Here is an example using Storage Client Library 2.0, which is easier to understand than the way it was done in the 1.7 API and earlier.
public void InsertOrReplace(ITableEntity entity)
{
retryPolicy.ExecuteAction(
() =>
{
try
{
TableOperation operation = TableOperation.InsertOrReplace(entity);
cloudTable.Execute(operation);
}
catch (StorageException e)
{
string message = "InsertOrReplace entity failed.";
if (e.RequestInformation.HttpStatusCode == 404)
{
message += " Make sure the table is created.";
}
// do something with message
}
});
}
The Storage API does not allow more than one operation per entity (delete+insert) in a group transaction:
An entity can appear only once in the transaction, and only one operation may be performed against it.
see MSDN: Performing Entity Group Transactions
So in fact you need to read first and decide on insert or update.
You may use the UpsertEntity and UpsertEntityAsync methods of the TableClient in the official Microsoft Azure.Data.Tables package.
A fully working example is available at https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet/blob/main/2-completed-app/AzureTablesDemoApplicaton/Services/TablesService.cs:
public void UpsertTableEntity(WeatherInputModel model)
{
TableEntity entity = new TableEntity();
entity.PartitionKey = model.StationName;
entity.RowKey = $"{model.ObservationDate} {model.ObservationTime}";
// The other values are added like items to a dictionary
entity["Temperature"] = model.Temperature;
entity["Humidity"] = model.Humidity;
entity["Barometer"] = model.Barometer;
entity["WindDirection"] = model.WindDirection;
entity["WindSpeed"] = model.WindSpeed;
entity["Precipitation"] = model.Precipitation;
_tableClient.UpsertEntity(entity);
}
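One detail worth noting: UpsertEntity defaults to merge semantics (TableUpdateMode.Merge), which keeps any stored properties the new entity doesn't set. To get replace semantics, matching the ReplaceOnUpdate behaviour discussed above, pass the mode explicitly:
// replace the stored entity wholesale instead of merging properties into it
_tableClient.UpsertEntity(entity, TableUpdateMode.Replace);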
I'm using SubSonic 2.2.
I tried asking this question another way but didn't get the answer I was looking for.
Basically, I usually include validation at page level, or in the code-behind for my user controls or aspx pages. However, I have seen some small bits of info advising that this can be done within the partial classes generated from SubSonic.
So my question is: where do I put these? Are there particular events I add my validation / business logic into, such as inserting or updating? If so, and validation isn't met, how do I stop the insert or update? And if anyone has a code example of how this looks, it would be great to start me off.
Any info greatly appreciated.
First you should create a partial class for the DAL object you want to use.
In my project I have a folder Generated where the generated classes live, and I have another folder Extended.
Let's say you have a SubSonic-generated class Product. Create a new file Product.cs in your Extended (or whatever) folder and create a partial class Product, ensuring that the namespace matches the namespace of the SubSonic-generated classes.
namespace Your.Namespace.DAL
{
public partial class Product
{
}
}
Now you have the ability to extend the Product class. The interesting part is that SubSonic offers some methods to override.
namespace Your.Namespace.DAL
{
public partial class Product
{
public override bool Validate()
{
ValidateColumnSettings();
if (string.IsNullOrEmpty(this.ProductName))
this.Errors.Add("ProductName cannot be empty");
return Errors.Count == 0;
}
// another way
protected override void BeforeValidate()
{
if (string.IsNullOrEmpty(this.ProductName))
throw new Exception("ProductName cannot be empty");
}
protected override void BeforeInsert()
{
this.ProductUUID = Guid.NewGuid().ToString();
}
protected override void BeforeUpdate()
{
this.Total = this.Net + this.Tax;
}
protected override void AfterCommit()
{
DB.Update<ProductSales>()
.Set(ProductSales.ProductName).EqualTo(this.ProductName)
.Where(ProductSales.ProductId).IsEqualTo(this.ProductId)
.Execute();
}
}
}
In response to Dan's question:
First, have a look here: http://github.com/subsonic/SubSonic-2.0/blob/master/SubSonic/ActiveRecord/ActiveRecord.cs
In this file lives the whole logic I showed in my other post.
Validate: Is called during Save(); if Validate() returns false, an exception is thrown. It only gets called if the property ValidateWhenSaving (which is a constant, so you have to recompile SubSonic to change it) is true (the default).
BeforeValidate: Is called during Save() when ValidateWhenSaving is true. Does nothing by default.
BeforeInsert: Is called during Save() if the record is new. Does nothing by default.
BeforeUpdate: Is called during Save() if the record is not new. Does nothing by default.
AfterCommit: Is called after successfully inserting/updating a record. Does nothing by default.
In my Validate() example, I first let the default ValidateColumnSettings() method run, which will add errors like "Maximum String length exceeded for column ProductName" if the product name is longer than the value defined in the database. Then I add another error string if ProductName is empty, and return false if the overall error count is bigger than zero.
This will make Save() throw an exception, so you can't store the record in the DB.
I would suggest you call Validate() yourself and, if it returns false, display the elements of this.Errors at the bottom of the page (the easy way), or (more elegantly) create a Dictionary<string, string> where the key is the column name and the value is the reason:
private Dictionary<string, string> CustomErrors = new Dictionary<string, string>();

public override bool Validate()
{
this.CustomErrors.Clear();
ValidateColumnSettings();
if (string.IsNullOrEmpty(this.ProductName))
this.CustomErrors.Add(this.Columns.ProductName, "cannot be empty");
if (this.UnitPrice < 0)
this.CustomErrors.Add(this.Columns.UnitPrice, "has to be 0 or bigger");
return this.CustomErrors.Count == 0 && Errors.Count == 0;
}
Then, if Validate() returns false, you can add the reason directly beside/below the right field in your webpage.
If Validate() returns true you can safely call Save(), but keep in mind that Save() could still throw other errors during persistence, like "Duplicate key ...".
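To make the flow concrete, page-level usage could look roughly like this (a sketch; txtProductName and lblErrors are hypothetical controls, and CustomErrors would need to be exposed or rendered by the part itself):
var product = new Product();
product.ProductName = txtProductName.Text; // hypothetical TextBox
if (product.Validate())
{
    product.Save(); // may still throw persistence errors, e.g. a duplicate key
}
else
{
    // the easy way: dump the error strings at the bottom of the page
    foreach (var error in product.Errors)
        lblErrors.Text += error + "<br/>";
}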
Thanks for the response, but can you confirm something for me, as I'm a little confused? If you're validating within Validate() or BeforeValidate() that the column (ProductName) value is empty or null, doesn't this mean that the insert / update has already been actioned? Otherwise how would it know that you've tried to insert or update a null value from the UI / aspx fields into the column?
Also, within ASP.NET insert or update events we use e.Cancel = true to stop the insert/update. If BeforeValidate fails, does it automatically stop the insert or update action?
If this is the case, isn't it easier to add page-level validation to stop the insert or update being fired in the first place?
I guess I'm a little confused about the lifecycle of these methods and when they come into play.