How to convert SqlExpression<T> into SqlExpression<TU> with ServiceStack OrmLite?

I need to work with SqlExpression<T> in a private method, but the result should be SqlExpression<TU> (due to my project context).
T and TU target the same structure (TU is a subset of T with some computed expressions).
I didn't find a way to convert SqlExpression<T> into SqlExpression<TU>.
In the following code, T is the Product class and TU is the Powder class.
// Sample code
private static SqlExpression<Powder> CreateDefaultExpression(IDbConnection db, IPowderListFilter request)
{
    var ev = db.From<Product>()
        // Joins are set here...
        .Select(p => new
        {
            CustomRalId = Sql.As(p.CustomRalId, nameof(Powder.CustomRalId)),
            Mass = Sql.As(Sql.Sum(p.Width * p.Height / 1000000) * 2 * Settings.LacqueringGramsPerSquareMeter / 1000, nameof(Powder.Mass))
        });

    // I need to do something like...
    ev.ConvertTo<SqlExpression<Powder>>();
    // or ...
    return new SqlExpression<Powder>(ev);
}

You basically can't: the typed SqlExpression is a query builder, and you can't just change its Type and have it automatically erase and reapply all of the mutations to the query builder using a different Type.
If reuse is the goal, you'd need to use standard C# to DRY up as much code as possible, which won't be much, since all of the lambda expressions are typed to a different Type.
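If the shared part is the Product query itself, one option is to keep the builder typed to Product and only map into Powder when the query is executed. Below is a minimal sketch, not a drop-in solution: the GetPowders method name is illustrative, and it assumes your OrmLite version exposes db.Select<Into, From>(SqlExpression<From>) for projecting into a different POCO (verify that against your version's API before relying on it).

// Sketch only: keep the query builder typed to the source table (Product) and
// map each row into the Powder DTO at execution time.
private static SqlExpression<Product> CreateDefaultProductExpression(IDbConnection db, IPowderListFilter request)
{
    var ev = db.From<Product>();
    // Joins / filters shared by every caller are applied here...
    return ev;
}

public static List<Powder> GetPowders(IDbConnection db, IPowderListFilter request)
{
    var ev = CreateDefaultProductExpression(db, request)
        .Select(p => new
        {
            CustomRalId = Sql.As(p.CustomRalId, nameof(Powder.CustomRalId)),
            Mass = Sql.As(Sql.Sum(p.Width * p.Height / 1000000) * 2 * Settings.LacqueringGramsPerSquareMeter / 1000, nameof(Powder.Mass))
        });

    // Executes the Product query and maps the resultset into Powder DTOs.
    return db.Select<Powder, Product>(ev);
}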

Related

Which nested data structure should I use for FluidFramework?

If I want to use a nested data structure, which structure is the better choice?
e.g.
For the eg1 case, it seems the children DDS data can not be loaded in 'hasInitialized()'.
For eg2, it seems a little complicated.
Can anyone show some examples for this case?
A DataObject contains a collection of one or more DDSes and nesting them allows you to create organizational structure. DDSes themselves can also be nested and retrieved in the initialization steps.
There are a few ways you can solve eg1 and I highlight two below.
For eg2 you're right that it's a bit complicated and generally not a pattern we are pushing at the moment. There are some examples in the repo, but you will find that they use a slightly different developer model than the HelloWorld. The SimpleComponentEmbed and the Pond are the best starting points if you want to explore DataObject embedding.
Using root SharedDirectory for nesting
The root SharedDirectory allows you to create sub-directories. This is a great way to manage collections of nested maps. Using sub-directories instead of putting a SharedMap on the root ensures creation and setting happen as one atomic action instead of two (one to create and one to set).
The current drawback is that you have to listen to events on the root directory and check whether they correspond to your sub-directory. Eventing on sub-directories is being tracked in Issue #1403.
protected async initializingFirstTime() {
    const subDirectory = this.root.createSubDirectory("sub-dir");
    subDirectory.createSubDirectory("sub-sub-dir");
}

protected async hasInitialized() {
    const subDirectory = this.root.getSubDirectory("sub-dir");
    const ssDir = subDirectory.getSubDirectory("sub-sub-dir");
    this.root.on("valueChanged", (changed: IDirectoryValueChanged) => {
        if (changed.path === ssDir.absolutePath) {
            const newValue = ssDir.get(changed.key);
            console.log(newValue);
        }
    });
    // An easy way to trigger/test the above handler without user interaction
    setTimeout(() => {
        ssDir.set("foo", "bar2");
    }, 10000); // 10 seconds
}
Using Nested DDSes
You can also store nested DDSes. The thing you need to remember is that you are storing/retrieving the handle to the DDS. In the example below we create map1 with a key/value, store map1 in map2, which in turn is stored on the root DDS. We then get the value from map1 by first getting map2, then map1, then the value.
protected async initializingFirstTime() {
    const map1 = SharedMap.create(this.runtime);
    map1.set("foo", "bar");
    const map2 = SharedMap.create(this.runtime);
    map2.set("map1", map1.handle);
    this.root.set("map2", map2.handle);
}

protected async hasInitialized() {
    const map2 = await this.root.get<IFluidHandle<SharedMap>>("map2").get();
    const map1 = await map2.get<IFluidHandle<SharedMap>>("map1").get();
    console.log(map1.get("foo"));
}

How to convert a DTO to Domain Objects

I'm trying to apply ubiquitous language to my domain objects.
I want to convert a Data Transfer Object coming from a client into the domain object. The Aggregate's constructor only accepts the required fields, and the rest of the parameters should be passed through the aggregate's API, even when the Aggregate is being created (by, say, a CreateAggregate command).
But the DTO to Aggregate mapping code becomes a bit messy:
if (DTO.RegistrantType == 0) {
    registrantType = RegistrantType.Person();
}
else if (DTO.RegistrantType == 1) {
    registrantType = RegistrantType.Company();
}
//.....
//.....
var aggregate = new Aggregate(
    title,
    weight,
    registrantType,
    route,
    callNumber
);
// look at this one:
if (DTO.connectionType == 0) {
    aggregate.Route(ConnectionType.InCity(cityId));
}
else if (DTO.connectionType == 1) {
    aggregate.Route(ConnectionType.Intercity(DTO.originCityId, DTO.DestinationCityId));
}
//..........
//..........
One thing I should mention is that this problem doesn't seem to be a domain-specific problem.
How can I reduce these if/else statements without leaking my domain internals, while being sure that the aggregate (not a mapping tool) doesn't accept values that can invalidate its business rules, and while keeping the ubiquitous language applied?
Please don't tell me I can use AutoMapper to do the trick. Please read the last part carefully.
Thank you.
A typical answer would be to convert the DTO (which is effectively a message) into a Command, where the command has all of the arguments expressed as domain specific value types.
void doX(DTO dto) {
    Command command = toCommand(dto);
    doX(command);
}

void doX(Command command) {
    // ...
    aggregate.Route(command.connectionType);
}
It's fairly common for the toCommand logic to use something like a Builder pattern to improve the readability of the code.
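As a rough illustration (every type and member name below is hypothetical, not part of the OP's model), the builder is where the raw DTO codes get translated into domain value types, so the command already carries a RegistrantType rather than an integer:

// A minimal sketch of a builder for the toCommand step; names are illustrative.
public sealed class CreateAggregateCommandBuilder
{
    private string _title;
    private decimal _weight;
    private RegistrantType _registrantType;

    public CreateAggregateCommandBuilder WithTitle(string title) { _title = title; return this; }

    public CreateAggregateCommandBuilder WithWeight(decimal weight) { _weight = weight; return this; }

    // The raw DTO code is translated into a domain value type here,
    // so the command and the aggregate never see the bare integer.
    public CreateAggregateCommandBuilder WithRegistrant(int dtoRegistrantType)
    {
        _registrantType = dtoRegistrantType == 0
            ? RegistrantType.Person()
            : RegistrantType.Company();
        return this;
    }

    public CreateAggregateCommand Build() =>
        new CreateAggregateCommand(_title, _weight, _registrantType);
}

Coming back to the if/else mapping from the question: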
if (DTO.connectionType == 0) {
    aggregate.Route(ConnectionType.InCity(cityId));
}
else if (DTO.connectionType == 1) {
    aggregate.Route(ConnectionType.Intercity(DTO.originCityId, DTO.DestinationCityId));
}
In cases like this one, the strategy pattern can help:
ConnectionTypeFactory f = getConnectionFactory(DTO.connectionType);
ConnectionType connectionType = f.create(DTO);
Once you recognize that ConnectionTypeFactory is a thing, you can think about building lookup tables to choose the right one.
Map<Integer, ConnectionTypeFactory> lookup = /* ... */;
ConnectionTypeFactory f = lookup.get(DTO.connectionType);
if (null == f) {
    f = defaultConnectionFactory;
}
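Putting the two ideas together, a minimal C# sketch might look like the following; everything other than ConnectionType (the DTO type, its properties, and the factory classes) is an assumption made for the example, not the OP's actual API.

using System;
using System.Collections.Generic;

// Strategy pattern plus a lookup table for turning a DTO code into a ConnectionType.
public interface IConnectionTypeFactory
{
    ConnectionType Create(RegistrationDto dto);
}

public sealed class InCityFactory : IConnectionTypeFactory
{
    public ConnectionType Create(RegistrationDto dto) => ConnectionType.InCity(dto.CityId);
}

public sealed class IntercityFactory : IConnectionTypeFactory
{
    public ConnectionType Create(RegistrationDto dto) =>
        ConnectionType.Intercity(dto.OriginCityId, dto.DestinationCityId);
}

public static class ConnectionTypeFactories
{
    private static readonly Dictionary<int, IConnectionTypeFactory> Lookup =
        new Dictionary<int, IConnectionTypeFactory>
        {
            [0] = new InCityFactory(),
            [1] = new IntercityFactory(),
        };

    public static ConnectionType From(RegistrationDto dto) =>
        Lookup.TryGetValue(dto.ConnectionType, out var factory)
            ? factory.Create(dto)
            : throw new ArgumentOutOfRangeException(nameof(dto.ConnectionType));
}

The call site in the mapping code then shrinks to a single line, e.g. aggregate.Route(ConnectionTypeFactories.From(dto));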
So why don't you use more inheritance?
For example:
class CompanyRegistration : Registration {
}
class PersonRegistration : Registration {
}
Then you can use inheritance instead of your if/else scenarios:
public class Aggregate {
    public Aggregate (CompanyRegistration c) {
        registrantType = RegistrantType.Company();
    }
    public Aggregate (PersonRegistration p) {
        registrantType = RegistrantType.Person();
    }
}
You can apply similar logic for, say, a setRoute method or any other large if/else situation.
Also, I know you don't want to hear it, but you can write your own mapper (inside the aggregate) that maps and validates its business logic.
For example, this idea comes from FluentMapper:
var mapper = new FluentMapper.ThatMaps<Aggregate>().From<DTO>()
    .ThatSets(x => x.title).When(x => x != null).From(x => x.title);
It isn't too hard to write your own mapper that allows these kinds of rules and validates your properties, and I think it will improve readability.
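As a rough sketch of such a hand-rolled, validating mapping step (every name here, including the DTO shape and the AggregateMapper class, is an assumption made for the example):

using System;

// Sketch only: a small mapping step that validates at the boundary so the
// aggregate never receives raw, invalid input.
public static class AggregateMapper
{
    public static Aggregate ToAggregate(RegistrationDto dto)
    {
        if (string.IsNullOrWhiteSpace(dto.Title))
            throw new ArgumentException("Title is required.", nameof(dto));

        var registrantType = dto.RegistrantType == 0
            ? RegistrantType.Person()
            : RegistrantType.Company();

        var aggregate = new Aggregate(dto.Title, dto.Weight, registrantType, dto.Route, dto.CallNumber);

        // Reuse the factory lookup sketched earlier for the connection type.
        aggregate.Route(ConnectionTypeFactories.From(dto));
        return aggregate;
    }
}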

Setting types of parsed values in Antlr

I have a rule that looks like this:
INTEGER : [0-9]+;
myFields : uno=INTEGER COMMA dos=INTEGER;
Right now, to access uno I need to code:
Integer i = Integer.parseInt(myFields.uno.getText());
It would be much cleaner if I could tell ANTLR to do that conversion for me; then I would just need to code:
Integer i = myFields.uno;
What are my options?
You could write the code as an action, but it would still be an explicit conversion (eventually). The parser (like every parser) parses the text, and then it's up to "parsing events" (handled by a listener, a visitor, or actions in ANTLR4) to create meaningful structures/objects.
Of course you could extend some of the generated or built-in classes and then get the type directly, but as mentioned before, at some point you'll always need to convert the text into the type you need.
A standard way of handling custom operations on tokens is to embed them in a custom token class:
public class MyToken extends CommonToken {
    // ...
    public Integer getInt() {
        return Integer.parseInt(getText()); // TODO: error handling
    }
}
Also create
public class MyTokenFactory implements TokenFactory<MyToken> { .... }
to source the custom tokens. Add the factory to the lexer using Lexer#setTokenFactory().
Within the custom TokenFactory, implement the method
Symbol create(int type, String text); // (typically implement both factory methods)
to construct and return a new MyToken.
Given that the signature includes the target token type type, custom type-specific token subclasses could be returned, each with their own custom methods.
A couple of issues with this, though. First, in practice it is typically not needed: the assignment variable is statically typed, so the caller already knows which conversion to ask for. If an Integer is desired and expected, use getInt(); if a Boolean, ....
Second, ANTLR's options allow setting a TokenLabelType to preclude the need to manually cast custom tokens, so, as in the OP example:
options { TokenLabelType = "MyToken"; }
Integer i = myFields.uno.getInt(); // no cast required
Only one token label type is supported, so to use multiple token types, manual casting is still required.

Javascript coding format between hash and class [duplicate]

I'm curious what the difference is between the following OOP JavaScript techniques. They seem to end up doing the same thing, but is one considered better than the other?
function Book(title) {
    this.title = title;
}

Book.prototype.getTitle = function () {
    return this.title;
};

var myBook = new Book('War and Peace');
alert(myBook.getTitle());
vs
function Book(title) {
    var book = {
        title: title
    };
    book.getTitle = function () {
        return this.title;
    };
    return book;
}

var myBook = Book('War and Peace');
alert(myBook.getTitle());
The second one doesn't really create an instance; it simply returns an object. That means you can't take advantage of operators like instanceof. E.g. with the first case you can do if (myBook instanceof Book) to check whether the variable is a Book, while with the second example this would fail.
If you want to specify your object methods in the constructor, this is the proper way to do it:
function Book(title) {
    this.title = title;
    this.getTitle = function () {
        return this.title;
    };
}

var myBook = new Book('War and Peace');
alert(myBook.getTitle());
While in this example both behave the exact same way, there are differences. With a closure-based implementation you can have private variables and methods (just don't expose them on the this object). So you can do something such as:
function Book(title) {
    var title_;
    this.getTitle = function () {
        return title_;
    };
    this.setTitle = function (title) {
        title_ = title;
    };
    // should use the setter in case it does something else than just assign
    this.setTitle(title);
}
Code outside of the Book function can not access the member variable directly; it has to use the accessors.
The other big difference is performance: prototype-based classing is usually much faster, due to the overhead involved in using closures. You can read about the performance differences in this article: http://blogs.msdn.com/b/kristoffer/archive/2007/02/13/javascript-prototype-versus-closure-execution-speed.aspx
Which is better can sometimes be defined by the context of their usage.
Three constraints of how I choose between Prototype and Closure methods of coding (I actively use both):
Performance/Resources
Compression requirements
Project Management
1. Performance/Resources
For a single instance of the object, either method is fine. Any speed advantages would most likely be negligible.
If I am instantiating 100,000 of these, like building a book library, then the Prototype Method would be preferred. All the .prototype. functions would only be created once, instead of these functions being created 100,000 times if using the Closure Method. Resources are not infinite.
2. Compression
Use the Closure Method if compression efficiency is important (ex: most browser libraries/modules). See below for explanation:
Compression - Prototype Method
function Book(title) {
    this.title = title;
}

Book.prototype.getTitle = function () {
    return this.title;
};
is YUI-compressed to:
function Book(a){this.title=a}Book.prototype.getTitle=function(){return this.title};
A savings of about 18% (all spaces/tabs/returns). This method requires variables/functions to be exposed (this.variable=value) so every prototype function can access them. As such, these variables/functions can't be optimized in compression.
Compression - Closure Method
function Book(title) {
    var title = title; // use var instead of this.title to make this a local variable
    this.getTitle = function () {
        return title;
    };
}
is YUI-compressed to:
function Book(a){var a=a;this.getTitle=function(){return a}};
A savings of about 33%. Local variables can be optimized. In a large module, with many support functions, this can have significant savings in compression.
3. Project Management
In a project with multiple developers, who could be working on the same module, I prefer the Prototype Method for that module, if not constrained by performance or compression.
For browser development, I can override production.prototype.aFunction from "production.js" in my own "test.js" (read in afterwards) for the purpose of testing or development, without having to modify "production.js", which may be in active development by a different developer.
I'm not a big fan of complex Git repository checkout/branch/merge/conflict flows. I prefer simple.
Also, the ability to redefine or "hijack" a module's function by a testbench can be beneficial, but too complicated to address here...
The former method is how JavaScript was intended to be used. The latter is the more modern technique, popularised in part by Douglas Crockford. This technique is much more flexible.
You could also do:
function Book(title) {
    return {
        getTitle: function () {
            return title;
        }
    };
}
The returned object would just have an accessor called getTitle, which would return the argument, held in closure.
Crockford has a good page on Private Members in JavaScript - definitely worth a read to see the different options.
It's also a little bit about reusability under the hood. In the first example, which uses the Function.prototype property, all instances of the Book function-object share the same copy of the getTitle method, while in the second snippet each execution of the Book function creates and keeps on the heap a separate copy of the local, closed-over book object.
function Book(title) {
    var book = {
        title: title
    };
    book.getTitle = function () {
        return this.title += '#';
    };
    return book;
}

var myBook = Book('War and Peace');
var myAnotherBook = Book('Anna Karenina');

alert(myBook.getTitle());        // War and Peace#
alert(myBook.getTitle());        // War and Peace##
alert(myAnotherBook.getTitle()); // Anna Karenina#
alert(myBook.getTitle());        // War and Peace###
The prototype members, on the other hand, exist in a single copy shared by all new instances of the object. So this is one more subtle difference between them that is not very obvious at first sight, due to the closure trick.
Here is an article about this.
In general, Book inherits from Book.prototype. In the first example you add the getTitle function to Book.prototype.

Get parameter values from method at run time

I have the following method example:
public void MethodName(string param1, int param2)
{
    object[] obj = new object[] { (object) param1, (object) param2 };
    // Code that uses this array to invoke dynamic methods
}
Is there a dynamic way (I am guessing using reflection) to get the currently executing method's parameter values and place them in an object array? I have read that you can get parameter information using MethodBase and MethodInfo, but those only hold information about the parameters, not the values themselves, which is what I need.
So, for example, if I pass "test" and 1 as method parameters, can I get an object array with two elements, { "test", 1 }, without coding for the specific parameters?
I would really prefer not to use a third-party API, but if there is source code for that API then I will accept it as an answer, as long as it's not a huge API and there is no simple way to do it without it.
I am sure there must be a way, maybe using the stack, who knows. You guys are the experts and that is why I come here.
Thank you in advance, I can't wait to see how this is done.
EDIT
It may not be clear, so here is some extra information. This code example is just that, an example to show what I want. It would be too bloated and big to show the actual code where it is needed, but the question is how to get the array without manually creating one. I need to somehow get the values and place them in an array without coding for the specific parameters.
Using reflection you can extract the parameters' names and metadata, but not the actual values:
using System;
using System.Diagnostics;
using System.Reflection;

class Program
{
    static void Main(string[] args)
    {
        Program p = new Program();
        p.testMethod("abcd", 1);
        Console.ReadLine();
    }

    public void testMethod(string a, int b)
    {
        // Frame 0 of the stack trace is the currently executing method (testMethod itself).
        StackTrace st = new StackTrace();
        StackFrame sf = st.GetFrame(0);

        // Only the parameter metadata (name, type, position) is available here;
        // the stack frame does not expose the actual argument values.
        ParameterInfo[] pis = sf.GetMethod().GetParameters();
        foreach (ParameterInfo pi in pis)
        {
            Console.Out.WriteLine(pi.Name);
        }
    }
}
