ANTLR4 contextSuperClass to add custom properties for a two-pass interpreter

I have just discovered contextSuperClass and have been experimenting with using it to provide scope annotations when building a symbol table in a first pass (I have a forward reference DSL).
I set the option in the grammar:
options {
    tokenVocab = MyLexer;
    language = CSharp;
    contextSuperClass = interpreter.MyParserRuleContext;
}
and I have a class that derives from ParserRuleContext:
public class MyParserRuleContext : ParserRuleContext
{
    public MyParserRuleContext()
    { }

    public MyParserRuleContext(ParserRuleContext parent, int invokingStateNumber)
        : base(parent, invokingStateNumber)
    { }

    // Custom annotation slot, filled in during the symbol-table pass
    public IScope ContextScope { get; set; }
}
So far so good. I use ParseTreeWalker with a listener (Enter/Exit methods) to walk the tree for the 1st pass and build the symbol table adding local scopes, etc into my ContextScope custom property.
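For concreteness, the first pass looks roughly like this (a sketch only; the Block rule, LocalScope type, and EnclosingScope property stand in for my actual grammar and symbol-table types):

public class SymbolTablePass : MyParserBaseListener
{
    private IScope currentScope; // scope currently being populated

    public override void EnterBlock(MyParser.BlockContext context)
    {
        // contextSuperClass makes every generated context derive from
        // MyParserRuleContext, so the scope can be stored on the node itself.
        currentScope = new LocalScope(currentScope);
        ((MyParserRuleContext)context).ContextScope = currentScope;
    }

    public override void ExitBlock(MyParser.BlockContext context)
    {
        currentScope = currentScope.EnclosingScope; // pop the scope on exit
    }
}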
The first issue is, of course, that after the symbol-table pass I am at the end of the token stream - the tree has been walked.
The 2nd parse uses a visitor to evaluate the final result.
I have two questions:
How do I "reset" the parser so that it is at the root again without loosing scopes I have added into my custom property?
The second question is broader, but similar. Is this even a reasonable way to add scope annotations to the parse tree?
I have previously tried to use ParseTreeProperty<IScope> to add scope annotations, but the problem is similar: during the 2nd phase, the context objects handed to the visitor are not the same objects that were added to the ParseTreeProperty<IScope> concurrent dictionary in the 1st pass, so they are not found. Between the 1st and 2nd passes I have only found parser.reset() as a way to start the parser over, and (of course) it fully resets everything, so I lose any state I created in the 1st pass.
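For reference, the earlier attempt followed the usual ParseTreeProperty pattern (a sketch, not my exact code):

var scopes = new ParseTreeProperty<IScope>();

// 1st pass (listener): annotate the node
scopes.Put(context, currentScope);

// 2nd pass (visitor): this lookup only succeeds if 'context' is the very
// same object instance that was annotated in the 1st pass
var scope = scopes.Get(context);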
I am likely completely missing something here, so any help to put me in the right direction will be greatly appreciated.

Related

ArchUnit: how to test for imports of specific classes outside of current package?

To externalize UI strings we use the "Messages class" approach as supported e.g. in Eclipse and other IDEs. This approach requires that each package needing UI strings contains a class Messages that offers a static method String getString(key), via which one obtains the actual String to display to the user. The Strings are internally accessed/fetched using Java's Resources mechanism for i18n.
Especially after refactoring, we again and again end up with accidental imports of a Messages class from a different package.
Thus I would like to create an ArchUnit rule checking that we only access classes called "Messages" from the very same package, i.e. each import of a class x.y.z.Messages is an error if the package x.y.z is not the same package as the class that contains the import.
I got as far as this:
@ArchTest
void preventReferencesToMessagesOutsideCurrentPackage(JavaClasses classes) {
    ArchRule rule;
    rule = ArchRuleDefinition.noClasses()
            .should().accessClassesThat().haveNameMatching("Messages")
            .???
            ;
    rule.check(classes);
}
but now I got stuck at the ???.
How can one phrase the condition "and the referenced/imported class "Messages" is not in the same package as this class"?
I somehow got lost among all these ArchUnit methods, none of which seems to fit here or lend itself to composing said condition. Probably I just can't see the forest for the trees.
Any suggestion or guidance, anyone?
You need to operate on instances of JavaAccess to validate the dependencies. JavaAccess provides information about the caller and the target such that you can validate the access dynamically depending on the package name of both classes.
DescribedPredicate<JavaAccess<?>> isForeignMessageClassPredicate =
        new DescribedPredicate<JavaAccess<?>>("target is a foreign message class") {
            @Override
            public boolean apply(JavaAccess<?> access) {
                JavaClass targetClass = access.getTarget().getOwner();
                if ("Messages".equals(targetClass.getSimpleName())) {
                    JavaClass callerClass = access.getOwner().getOwner();
                    return !targetClass.getPackageName().equals(callerClass.getPackageName());
                }
                return false;
            }
        };

ArchRule rule =
        noClasses().should().accessTargetWhere(isForeignMessageClassPredicate);

Xtext: permit invalid cross-reference

I need to build a grammar containing a cross-reference which may be invalid, i.e. point to a non-existing target. A file containing such a reference should not yield an error, but only a warning. The generator would handle this as a special case.
How can I do this with Xtext?
It's not possible to create valid cross references to non-existing targets in EMF.
I would suggest going with EAttributes instead of EReferences:
Replace feature=[EClass|ID] with feature=ID in the {YourDSL} grammar.
Provide a scope calculation utility like it's done in the scope_EClass_feature(context, reference) method in the {YourDSL}ScopeProvider class. As these scoping methods simply use the eType of the given reference, the reimplementation should be straightforward.
Use this scope calculation utility in the {YourDSL}ProposalProvider to propose values for the introduced EAttribute.
Optionally, you can use this utility in a validation rule to add a warning/info to this EAttribute if it's not "valid".
Finally, use the utility in your generator to create output based on valid target EObjects.
I also ran into this problem when creating a DSL to provide declarations of variables for a non-declarative language during a transition phase. This method works, but ask yourself if you really want to have those nasty maybe-valid references.
You can drop the auto-generated error in your UI module only. To do so, provide an ILinkingDiagnosticMessageProvider and override the function getUnresolvedProxyMessage:
class DSLLinkingDiagnosticMessageProvider extends LinkingDiagnosticMessageProvider {
    override getUnresolvedProxyMessage(ILinkingDiagnosticContext context) {
        if (context.context instanceof YourReference) {
            // return null so the error is left out
            null
        } else {
            // use the super implementation for all others
            super.getUnresolvedProxyMessage(context)
        }
    }
}
All linker errors for YourReference will be suppressed. But be aware that there will be a dummy referenced object with all fields null. In particular, the name is lost, and you cannot set it due to a CyclicLinkingException. But you may create a new method that sets the name directly.
Note that the dummy object will have the type you declared in your grammar. If it's abstract, you can easily check which reference is not linked.

How to auto-generate early bound properties for Entity specific (ie Local) Option Set text values?

After spending a year working with the Microsoft.Xrm.Sdk namespace, I just discovered yesterday that the Entity.FormattedValues property contains the text value for Entity-specific (i.e. local) Option Sets.
The reason I didn't discover it before is that there is no early-bound way of getting the value; i.e. entity.new_myOptionSet is of type OptionSetValue, which only contains the int value. You have to call entity.FormattedValues["new_myoptionset"] to get the string text value of the OptionSetValue.
Therefore, I'd like to get CrmSvcUtil to auto-generate a text property for local option sets, i.e. along with Entity.new_myOptionSet being generated as it currently is, Entity.new_myOptionSetText would be generated as well.
I've looked into the Microsoft.Crm.Services.Utility.ICodeGenerationService, but that looks like it is mostly for specifying what CodeGenerationType something should be...
Is there a way supported way using CrmServiceUtil to add these properties, or am I better off writing a custom app that I can run that can generate these properties as a partial class to the auto-generated ones?
Edit - Example of the code that I would like to be generated
Currently, whenever I need to access the text value of a OptionSetValue, I use this code:
var textValue = OptionSetCache.GetText(service, entity, e => e.New_MyOptionSet);
The option set cache will use the entity.LogicalName and the property expression to determine the name of the option set I'm asking for. It will then query the SDK using a RetrieveAttributeRequest to get the list of the option set's int and text values, which it caches so it doesn't have to hit CRM again. It then looks up the int value of the entity's New_MyOptionSet and cross-references it with the cached list to get the text value of the OptionSet.
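The metadata query at the heart of that cache looks roughly like this (a sketch; the caching plumbing is omitted):

// Fetch the option set metadata once, then cache the int -> label map.
var request = new RetrieveAttributeRequest
{
    EntityLogicalName = entity.LogicalName,
    LogicalName = "new_myoptionset",
    RetrieveAsIfPublished = true
};
var response = (RetrieveAttributeResponse)service.Execute(request);
var metadata = (PicklistAttributeMetadata)response.AttributeMetadata;
var labels = metadata.OptionSet.Options.ToDictionary(
    o => o.Value.GetValueOrDefault(),
    o => o.Label.UserLocalizedLabel.Label);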
Instead of doing all of that, I can just do this (assuming that the entity has been retrieved from the server, and not just populated client side):
var textValue = entity.FormattedValues["new_myoptionset"];
but the "new_myoptionset" is no longer early bound. I would like the early bound entity classes that gets generated to also generate an extra "Text" property for OptionSetValue properties that calls the above line, so my entity would have this added to it:
public string New_MyOptionSetText {
    // GetFormattedAttributeValue is a protected method on the Entity class itself
    get { return this.GetFormattedAttributeValue("new_myoptionset"); }
}
Could you utilize the CrmSvcUtil extension that generates enums for your OptionSets, and then add your New_MyOptionSetText property to a partial class that compares the int value to the enum values and returns the enum's string name?
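Something along these lines, perhaps (a sketch; new_myoptionsetEnum stands in for whatever enum name the extension actually generates):

public partial class MyEntity
{
    public string New_MyOptionSetText
    {
        get
        {
            // Map the stored int back onto the generated enum's name.
            var value = this.new_MyOptionSet; // OptionSetValue, int only
            return value == null
                ? null
                : ((new_myoptionsetEnum)value.Value).ToString();
        }
    }
}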
Again, I think specifically for this case, getting CrmSvcUtil.exe to generate the code you want is a great idea, but more generally, you can access the property name via reflection using an approach similar to the accepted answer to "workarounds for nameof() operator in C#: typesafe databinding".
var textValue = entity.FormattedValues["new_myoptionset"];
// becomes
var textValue = entity.FormattedValues
[
// renamed the class from Nameof to NameOf
NameOf(Xrm.MyEntity).Property(x => x.new_MyOptionSet).ToLower()
];
The latest version of the CRM Early Bound Generator includes a Fields struct that contains the field names. This allows accessing the FormattedValues to be as simple as this:
var textValue = entity.FormattedValues[MyEntity.Fields.new_MyOptionSet];
You could create a new property via an interface for the CrmSvcUtil, but that's a lot of work for a fairly simple call, and I don't think it justifies creating additional properties.

ServiceStack: RESTful Resource Versioning

I've read the Advantages of message-based web services article and am wondering whether there is a recommended style/practice for versioning RESTful resources in ServiceStack. The different versions could render different responses or have different input parameters in the Request DTO.
I'm leaning toward URL-based versioning (e.g. /v1/movies/{Id}), but I have seen other practices that set the version in the HTTP headers (e.g. Content-Type: application/vnd.company.myapp-v2).
I'm hoping for a way that works with the metadata page, but it's not really a requirement, as I've noticed that simply using folder structure/namespacing works fine when rendering routes.
For example (this doesn't render right in the metadata page but performs properly if you know the direct route/url)
/v1/movies/{id}
/v1.1/movies/{id}
Code
namespace Samples.Movies.Operations.v1_1
{
    [Route("/v1.1/Movies", "GET")]
    public class Movies
    {
        ...
    }
}
namespace Samples.Movies.Operations.v1
{
    [Route("/v1/Movies", "GET")]
    public class Movies
    {
        ...
    }
}
and corresponding services...
public class MovieService: ServiceBase<Samples.Movies.Operations.v1.Movies>
{
    protected override object Run(Samples.Movies.Operations.v1.Movies request)
    {
        ...
    }
}
public class MovieService: ServiceBase<Samples.Movies.Operations.v1_1.Movies>
{
    protected override object Run(Samples.Movies.Operations.v1_1.Movies request)
    {
        ...
    }
}
Try to evolve (not re-implement) existing services
For versioning, you are going to be in for a world of hurt if you try to maintain different static types for different version endpoints. We initially started down this route, but as soon as you start to support your first version, the development effort to maintain multiple versions of the same service explodes: you need to maintain manual mappings between different types, which easily leaks into having to maintain multiple parallel implementations, each coupled to a different version's type - a massive violation of DRY. This is less of an issue for dynamic languages, where the same models can easily be re-used by different versions.
Take advantage of built-in versioning in serializers
My recommendation is not to version explicitly but to take advantage of the versioning capabilities inside the serialization formats.
E.g. you generally don't need to worry about versioning with JSON clients, as the versioning capabilities of the JSON and JSV serializers are much more resilient.
Enhance your existing services defensively
With XML and DataContracts you can freely add and remove fields without making a breaking change. If you add IExtensibleDataObject to your response DTOs you also have the potential to access data that's not defined on the DTO. My approach to versioning is to program defensively so as not to introduce a breaking change; you can verify this is the case with integration tests using old DTOs. Here are some tips I follow:
Never change the type of an existing property - if you need it to be a different type, add another property and use the old/existing one to determine the version.
Program defensively: realize which properties don't exist for older clients, and don't make them mandatory.
Keep a single global namespace (only relevant for XML/SOAP endpoints)
I do this by using the [assembly] attribute in the AssemblyInfo.cs of each of your DTO projects:
[assembly: ContractNamespace("http://schemas.servicestack.net/types",
ClrNamespace = "MyServiceModel.DtoTypes")]
The assembly attribute saves you from manually specifying explicit namespaces on each DTO, i.e:
namespace MyServiceModel.DtoTypes {
    [DataContract(Namespace="http://schemas.servicestack.net/types")]
    public class Foo { .. }
}
If you want to use a different XML namespace than the default above you need to register it with:
SetConfig(new EndpointHostConfig {
    WsdlServiceNamespace = "http://schemas.my.org/types"
});
Embedding Versioning in DTOs
Most of the time, if you program defensively and evolve your services gracefully, you won't need to know exactly what version a specific client is using, as you can infer it from the data that is populated. But in the rare cases where your service needs to tweak its behavior based on the specific version of the client, you can embed version information in your DTOs.
With the first release of your DTOs you publish, you can happily create them without any thought of versioning.
class Foo {
    string Name;
}
But maybe for some reason the Form/UI was changed and you no longer wanted the client to use the ambiguous Name variable, and you also wanted to track the specific version the client was using:
class Foo {
    Foo() {
        Version = 1;
    }
    int Version;
    string Name;
    string DisplayName;
    int Age;
}
Later it was discussed in a team meeting that DisplayName wasn't good enough, and you should split it out into different fields:
class Foo {
    Foo() {
        Version = 2;
    }
    int Version;
    string Name;
    string DisplayName;
    string FirstName;
    string LastName;
    DateTime? DateOfBirth;
}
So the current state is that you have 3 different client versions out, with existing calls that look like:
v1 Release:
client.Post(new Foo { Name = "Foo Bar" });
v2 Release:
client.Post(new Foo { Name="Bar", DisplayName="Foo Bar", Age=18 });
v3 Release:
client.Post(new Foo { FirstName = "Foo", LastName = "Bar",
DateOfBirth = new DateTime(1994, 01, 01) });
You can continue to handle these different versions in the same implementation (which will be using the latest v3 version of the DTOs) e.g:
class FooService : Service {
    public object Post(Foo request) {
        // v1 clients send:
        //   request.Version == 0
        //   request.Name == "Foo Bar"
        //   request.DisplayName == null
        //   request.Age == 0
        //   request.DateOfBirth == null
        // v2 clients send:
        //   request.Version == 2
        //   request.Name == null
        //   request.DisplayName == "Foo Bar"
        //   request.Age == 18
        //   request.DateOfBirth == null
        // v3 clients send:
        //   request.Version == 3
        //   request.Name == null
        //   request.DisplayName == null
        //   request.FirstName == "Foo"
        //   request.LastName == "Bar"
        //   request.Age == 0
        //   request.DateOfBirth == new DateTime(1994, 01, 01)
        return null; // handle each case using the populated fields above
    }
}
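If that per-version branching starts to sprawl, one variation (my sketch, not part of the original answer; the name-splitting is deliberately naive) is to normalize older requests to the latest shape up front:

class FooService : Service {
    public object Post(Foo request) {
        // Upgrade v2-style requests (DisplayName populated) to v3 fields.
        if (request.FirstName == null && request.DisplayName != null) {
            var parts = request.DisplayName.Split(' ');
            request.FirstName = parts[0];
            request.LastName = parts.Length > 1 ? parts[1] : null;
        }
        // Upgrade v1-style requests (only Name populated) as a best effort.
        if (request.FirstName == null && request.Name != null) {
            request.FirstName = request.Name;
        }
        // From here on, only the v3 fields need to be handled.
        return null; // actual handling elided
    }
}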
Framing the Problem
The API is the part of your system that exposes its expression. It defines the concepts and the semantics of communicating in your domain. The problem comes when you want to change what can be expressed or how it can be expressed.
There can be differences in both the method of expression and what is being expressed. The first problem tends to be differences in tokens (first and last name instead of name). The second problem is expressing different things (the ability to rename oneself).
A long-term versioning solution will need to solve both of these challenges.
Evolving an API
Evolving a service by changing the resource types is a type of implicit versioning. It uses the construction of the object to determine behavior. It works best when there are only minor changes to the method of expression (like the names). It does not work well for more complex changes to the method of expression or for changes in expressiveness. Code tends to be scattered throughout.
Specific Versioning
When changes become more complex, it is important to keep the logic for each version separate. Even in mythz's example, he segregated the code for each version. However, the code is still mixed together in the same methods. It is very easy for code for the different versions to start collapsing into each other, and it is likely to spread out. Getting rid of support for a previous version can be difficult.
Additionally, you will need to keep your old code in sync with any changes in its dependencies. If a database changes, the code supporting the old model will also need to change.
A Better Way
The best way I've found is to tackle the expression problem directly. Each time a new version of the API is released, it is implemented on top of the new layer. This is generally easy because the changes are small.
It really shines in two ways: first, all the code to handle the mapping is in one spot, so it is easy to understand or remove later; and second, it doesn't require maintenance as new APIs are developed (the Russian doll model).
The problem is when the new API is less expressive than the old API. This is a problem that will need to be solved no matter how you keep the old version around; this style just makes it clear that there is a problem and what the solution for that problem is.
Mythz's example rewritten in this style is:
// Schematic: each older API layer delegates to the next newer one
// and maps between the two shapes.
namespace APIv3 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var data = repository.getData();
            request.FirstName = data.firstName;
            request.LastName = data.lastName;
            request.DateOfBirth = data.dateOfBirth;
        }
    }
}
namespace APIv2 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var v3Request = APIv3.FooService.OnPost(request);
            request.DisplayName = v3Request.FirstName + " " + v3Request.LastName;
            request.Age = (new DateTime() - v3Request.DateOfBirth).years;
        }
    }
}
namespace APIv1 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var v2Request = APIv2.FooService.OnPost(request);
            request.Name = v2Request.DisplayName;
        }
    }
}
Each exposed object is clear. The same mapping code still needs to be written in both styles, but in the separated style, only the mapping relevant to a type needs to be written. There is no need to explicitly map code that doesn't apply (which is just another potential source of error). The dependencies of previous APIs stay static when you add future APIs or change the dependencies of the API layer. For example, if the data source changes, then only the most recent API (version 3) needs to change in this style. In the combined style, you would need to code the changes for each of the APIs supported.
One concern in the comments was the addition of types to the code base. This is not a problem because these types are exposed externally. Providing the types explicitly in the code base makes them easy to discover and isolate in testing. It is much better for maintainability to be clear. Another benefit is that this method does not produce additional logic, but only adds additional types.
I am also trying to come up with a solution for this and was thinking of doing something like the below. (Based on a lot of Googling and Stack Overflow querying, so this is built on the shoulders of many others.)
First up, I don't want to debate whether the version should be in the URI or in a request header. There are pros/cons to both approaches, so I think each of us needs to use whatever meets our requirements best.
This is about how to design/architect the Java message objects and the resource implementation classes.
So let's get to it.
I would approach this in two steps: minor changes (e.g. 1.0 to 1.1) and major changes (e.g. 1.1 to 2.0).
Approach for minor changes
So let's say we go by the same example classes used by @mythz.
Initially we have
class Foo { string Name; }
We provide access to this resource as /V1.0/fooresource/{id}
In my use case, I use JAX-RS:
@Path("/{versionid}/fooresource")
public class FooResource {
    @GET
    @Path("/{id}")
    public Foo getFoo(@PathParam("versionid") String versionid, @PathParam("id") String fooId) {
        Foo foo = new Foo();
        // setters, load data from persistence, handle business logic, etc.
        return foo;
    }
}
Now let’s say we add 2 additional properties to Foo.
class Foo {
    string Name;
    string DisplayName;
    int Age;
}
What I do at this point is annotate the properties with a @Version annotation:
class Foo {
    @Version("V1.0") string Name;
    @Version("V1.1") string DisplayName;
    @Version("V1.1") int Age;
}
Then I have a response filter that, based on the requested version, returns to the user only the properties that match that version. Note that, for convenience, if a property should be returned for all versions, you just don't annotate it and the filter will return it irrespective of the requested version.
This is sort of like a mediation layer. What I have explained is a simplistic version, and it can get very complicated, but I hope you get the idea.
Approach for major changes
Now this can get quite complicated when a lot of changes have been made from one version to another. That is when we need to move to the second option.
Option 2 is essentially to branch off the codebase, make the changes on that branch, and host both versions in different contexts. At this point we might have to refactor the codebase a bit to remove the version-mediation complexity introduced in the first approach (i.e. make the code cleaner). This would mainly be in the filters.
Note that this is just what I am thinking and haven't implemented yet, and I wonder if it is a good idea.
Also, I was wondering if there are good mediation engines/ESBs that could do this type of transformation without having to use filters, but I haven't seen any that is as simple as using a filter. Maybe I haven't searched enough.
Interested in knowing the thoughts of others, and whether this solution addresses the original question.

App-level settings in DDD?

Just wanted to get the group's thoughts on how to handle configuration details of entities.
What I'm thinking of specifically is high-level settings which might be admin-changed - the sort of thing that you might ultimately store in the app or web.config, but which from the DDD perspective should be set explicitly somewhere in the objects.
For sake of argument, let's take as an example a web-based CMS or blog app.
A given blog Entry entity has any number of instance settings like Author, Content, etc.
But you also might want to set (for example) a default Description or Keywords that all entries in the site should start with if they're not changed by the author. Sure, you could just make those constants in the class, but then the site owner couldn't change the defaults.
So my thoughts are as follows:
1) Use class-level (static) properties to represent those settings, and set them when the app starts up, either from the DB or from the web.config.
or
2) Use a separate entity for holding the settings, possibly a dictionary, and either use it directly or have it be a member of the Entry class (see the sketch below).
What strikes you all as the most easy/flexible? My concern about the first one is that it doesn't strike me as very pluggable (if I end up wanting to add more features), since changing an entity's class methods would make me change the app itself as well (which feels like an OCP violation). The second one feels heavier, though, especially if I then have to cast or parse values out of a dictionary.
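For illustration, what I have in mind for option 2 is something like this (a rough sketch; the typed accessors are there to avoid the cast/parse problem):

public class SiteSettings
{
    private readonly IDictionary<string, string> values =
        new Dictionary<string, string>();

    // Typed accessors hide the dictionary from consumers such as Entry.
    public string DefaultDescription
    {
        get { return this.values["DefaultDescription"]; }
        set { this.values["DefaultDescription"] = value; }
    }

    public string DefaultKeywords
    {
        get { return this.values["DefaultKeywords"]; }
        set { this.values["DefaultKeywords"] = value; }
    }
}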
I would say that whether a value is configurable or not is irrelevant from the Domain Model's perspective - what matters is that it is externally defined.
Let's say that you have a class that must have a Name. If the Name is always required, it must be encapsulated as an invariant irrespective of the source of the value. Here's a C# example:
public class MyClass
{
    private string name;

    public MyClass(string name)
    {
        if (name == null)
        {
            throw new ArgumentNullException("name");
        }
        this.name = name;
    }

    public string Name
    {
        get { return this.name; }
        set
        {
            if (value == null)
            {
                throw new ArgumentNullException("value");
            }
            this.name = value;
        }
    }
}
A class like this effectively protects the invariant: Name must not be null. Domain Models must encapsulate invariants like this without any regard to which consumer will be using them - otherwise, they would not meet the goal of Supple Design.
But you asked about default values. If you have a good default value for Name, then how do you communicate that default value to MyClass?
This is where Factories come in handy. You simply separate the construction of your objects from their implementation. This is often a good idea in any case. Whether you choose an Abstract Factory or Builder implementation is less important, but Abstract Factory is a good default choice.
In the case of MyClass, we could define the IMyClassFactory interface:
public interface IMyClassFactory
{
    MyClass Create();
}
Now you can define an implementation that pulls the name from a config file:
public class ConfigurationBasedMyClassFactory : IMyClassFactory
{
    public MyClass Create()
    {
        // Pull the externally defined default from configuration
        var name = ConfigurationManager.AppSettings["MyName"];
        return new MyClass(name);
    }
}
Make sure that code that needs instances of MyClass uses IMyClassFactory to create them instead of newing them up manually.
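For example, a consumer might take the factory as a dependency (names here are illustrative, not from the original answer):

public class EntryService
{
    private readonly IMyClassFactory factory;

    public EntryService(IMyClassFactory factory)
    {
        this.factory = factory;
    }

    public MyClass CreateEntry()
    {
        // The factory supplies the externally defined default Name, so this
        // class never has to touch the configuration source itself.
        return this.factory.Create();
    }
}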
