Given the recent occurrence of questions about 'AutoMapper migration from the static API', and the fact that I was exploring Christos Sakell's blog 'Building Single Page Applications using Web API and AngularJS', I also came across some obsolete warnings.
/***********************************************************************************
* 'AutoMapper.Mapper.CreateMap()' is obsolete: 'Dynamically creating maps will be removed in version 5.0.
* Use a MapperConfiguration instance and store statically as needed, or Mapper.Initialize.
* Use CreateMapper to create a mapper instance.
***********************************************************************************/
Completely ignorant of the recent changes to AutoMapper (I am using this mapping tool for the very first time), I was trying to get rid of those obsolete messages. Searching around, I came across two possible solutions:
1) http://quabr.com/36398318/automapper-mapper-createmaptsource-tdestination-is-obsolete , and
2) http://davecallan.com/automapper-4-2-example/#comment-8914
These hints seemed pretty straightforward. It turned out that tweaking the *MappingProfile classes wasn't that difficult, but what to do in the various controllers? That was a completely different story. I got completely lost, running into all kinds of strange errors without a clue what to do. Even the blogs just mentioned weren't that explanatory; I got the feeling they were keeping the puzzle alive (I'm kidding).
So the question was: how to solve the troublesome mapping from the controller to the *MappingProfile class. Luckily, I found a possible solution, so let's tackle it.
I do like puzzling, and buzzing around the net I finally found a combination of hints; that's what I want to share with you.
The solution is a combination of tweaks.
First of all, have a look at page 'Migrating from static API' (https://github.com/AutoMapper/AutoMapper/wiki/Migrating-from-static-API).
Notice the second code block right below 'In 4.2.x and later ...'; that's what is needed in the various controllers. Dave Callan gave part of the clue, but he left out part of the solution, so I asked him to add the missing part as well.
So, implementing the hint on the 'Migrating from static API' page in combination with Dave Callan's suggestion (http://davecallan.com/automapper-4-2-example/#comment-8919) of instantiating a new MapperConfiguration(...) in the *MappingProfile class(es), you can get rid of those obsolete warnings and get proper mapping on every cycle.
At least, I got rid of those warnings.
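For reference, here is a minimal sketch of that combination, assuming AutoMapper 4.2.x; the profile name and the Order/OrderViewModel types are hypothetical stand-ins for your own classes:

using System.Collections.Generic;
using AutoMapper;

public class Order { public int Id { get; set; } }           // hypothetical domain type
public class OrderViewModel { public int Id { get; set; } }  // hypothetical view model

// Block 1: a *MappingProfile class. (Note: in AutoMapper 5+ this
// configuration moves into the profile's constructor instead.)
public class DomainToViewModelMappingProfile : Profile
{
    protected override void Configure()
    {
        CreateMap<Order, OrderViewModel>();
    }
}

// Block 2: configure once at application start and store the mapper
// statically, instead of using the obsolete static Mapper API:
public static class AutoMapperConfiguration
{
    public static IMapper Mapper { get; private set; }

    public static void Configure()
    {
        var config = new MapperConfiguration(cfg =>
        {
            cfg.AddProfile<DomainToViewModelMappingProfile>();
        });
        Mapper = config.CreateMapper();
    }
}

// In a controller, use the mapper instance instead of AutoMapper.Mapper:
// IEnumerable<OrderViewModel> viewModels =
//     AutoMapperConfiguration.Mapper.Map<IEnumerable<Order>, IEnumerable<OrderViewModel>>(orders);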
What will happen with AutoMapper in the near future, who knows; hopefully those guys in Austin, TX will make proper decisions on this matter.
I also tested the code (block 2 of 'Migrating from static API') with the suggestion on Quabr (instead of instantiating a new MapperConfiguration, they suggest calling the Initialize method of the current Mapper). This combination gave me errors like 'Index was outside the bounds of the array'.
Apparently the mapping lists are not renewed on each cycle. This combination of code tweaks was (for me) a 'no go'. Perhaps others do not agree with me; fine, at least that's what I noticed.
My only goal is to help other members who are facing comparable trouble.
I'm pleased to share my solution. Have fun!
Using jOOQ version 3.13.4.
I don't like the ASCII table formatting that the Record.toString() method produces. It's particularly unhelpful in the IDEA debugger view.
I also don't like dumping needless multi-line strings into my prod logs (which I usually avoid doing), partially for obvious speed/size reasons, but also because they're ugly and hard to read in most log viewer apps (CloudWatch etc.).
I found the Github issue that implemented the ASCII table: https://github.com/jOOQ/jOOQ/issues/1806
I know I can write code to customise this, but I'm wondering if there's a simple flag/config I'm missing to get a simple summary (if I ever got around to writing my own, I'll use JSON)?
The current formatted ASCII table is quite useful for most debugging purposes, which include simple println() calls, Eclipse's debugging view (see below), etc. This includes showing only the first 5 records in a Result for a quick overview, and Record classes work the same.
The feedback from the community has generally been good for these defaults, but if you want to challenge them, try your luck here, and see if your feature request gets community upvotes: https://github.com/jOOQ/jOOQ/issues/new/choose
Eclipse's variables view:
As you can see, with the detail pane, it's much less of a problem in Eclipse. I've always found it surprising that IntelliJ didn't copy this very useful feature from Eclipse. Maybe request it?
I know I can write code to customise this
I'll document it here nonetheless, as future visitors will no doubt find this useful.
IntelliJ has a lot of "smartness" in its debugger's variables view, sometimes a bit excessive. For example, an org.jooq.Result just displays its size(), because a Result is a List. I guess it's smart to hedge against excessive rendering effort when JDK lists are very long, though jOOQ results could handle it; otherwise it would be possible to display list.subList(0, Math.min(list.size(), 5)) + "...".
Luckily, you can easily specify your own type renderers in IntelliJ's debugger settings, for example one for org.jooq.Record.
It uses this code to render a jOOQ Record:
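// Renderer expression, evaluated on the Record instance being displayed
// ('this' is the implicit receiver of formatJSON()):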
formatJSON(new JSONFormat()
.header(false)
.recordFormat(JSONFormat.RecordFormat.OBJECT)
)
The output is then a compact JSON object per record, something like {"ID": 1, "TITLE": "1984"}, instead of the multi-line ASCII table.
You might even suggest this as a default to IntelliJ?
but I'm wondering if there's a simple flag/config I'm missing to get a simple summary (if I ever got around to writing my own, I'll use JSON)?
No, there isn't, and I doubt that kind of configurability is reasonable. People will start requesting 100s of flags to get their preferred way of debugging, when in fact, this can be very easily achieved in the IDE as shown above.
Again, you can try your luck with a change request to change it for everyone, but a configuration for this is unlikely to be implemented.
So I've recently started using TypeScript, both for the back end (Node.js) and the front end, and I'm starting to feel an urge to chew on my own arm every time I have to add a new import.
Coming from the .NET world for the last 15 years, I've come to appreciate its more or less automatic type resolving, especially with a background in C/C++, which, whenever I return to it, reminds me of the #include hell that all too often arises. Given what I am now facing, I get a feeling of "those were the days".
I typically prefer to keep my code as one class -> one file (with some obvious exceptions for smaller stuff). This results in a lot of files and even more imports. I recently discovered some tools that help with creating the imports, but it is still really annoying.
I get that the underlying JS needs these imports (in various ways depending on the module system). But given how easily these tools resolve the imports, would it not be possible for the compiler to simply generate them? In the rare case of ambiguity, the compiler would simply give an error and the user would need to resolve it manually.
TypeScript seems to be a great language otherwise, but this is really close to a deal breaker for me. Or am I missing something? Can this be done in a better way?
On one hand the advice to always close objects is so common that I would feel foolish to ignore it (e.g. VBScript Out Of Memory Error).
However it would be equally foolish to ignore the wisdom of Eric Lippert, who appears to disagree: http://blogs.msdn.com/b/ericlippert/archive/2004/04/28/when-are-you-required-to-set-objects-to-nothing.aspx
I've worked to fix a number of web apps with OOM errors in classic ASP. My first (time-consuming) task is always to search the code for unclosed objects, and objects not set to Nothing.
But I've never been 100% convinced that this has helped. (That said, I have found it hard to pinpoint exactly what DOES help...)
This post by Eric is talking about standalone VBScript files, not classic ASP written in VBScript. See the comments, then Eric's own comment:
Re: ASP -- excellent point, and one that I had not considered. In ASP it is sometimes very difficult to know where you are and what scope you're in.
So from this I can say that what he wrote isn't relevant for classic ASP, i.e. in classic ASP you should always Set everything to Nothing.
As for memory issues, I think that assigning objects (or arrays) to global scope like Session or Application is the main reason for such problems. That's the first thing I would look for, and I would rewrite the code to hold only a single identifier in Session and then use the database to manage the data.
Basically by setting a COM object to Nothing, you are forcing its terminator to run deterministically, which gives you the opportunity to handle any errors it may raise.
If you don't do it, you can get into a situation like the following:
Your code raises an error
The error isn't handled in your code and therefore ...
other objects instantiated in your code go out of scope, and their terminators run
one of the terminators raises an error
and the error that is propagated is the one from the terminator, masking the original error.
I do remember from the dark and distant past that it was specifically recommended to close ADO objects. I'm not sure if this was because of a bug in ADO objects, or simply for the above reason (which applies more generally to any objects that can raise errors in their terminators).
And this recommendation is often repeated, though often without any credible reason. ("While ASP should automatically close and free up all object instantiations, it is always a good idea to explicitly close and free up object references yourself").
It's worth noting that in the article, he's not saying you should never worry about setting objects to nothing - just that it should not be the default behaviour for every object in every script.
Though I do suspect he's a little too quick to dismiss the "I saw this elsewhere" method of coding behaviour, I'm willing to bet there's a reason Eric didn't consider that has caused this advice to be passed along as a hard-and-fast rule: dealing with junior programmers.
When you start looking more closely at the Dreyfus model of skill acquisition, you see that at the beginning levels of acquiring a new skill, learners need simple-to-follow recipes. They do not yet have the knowledge or ability to make the judgement calls with which Eric qualifies his recommendation later on.
Think back to when you first started programming. Could you readily judge whether you were "set[ting] expensive objects to Nothing when you are done with them if you are done with them well before they go out of scope"? Did you really know which objects were expensive, or when they truly went out of scope?
Thus, most entry level programmers are simply told "always set every object to Nothing when you are done with it" because it is within their grasp to understand and follow. Unfortunately, not many programmers take the time to self-educate, learn, and grow into the higher-level Dreyfus stages where you can use the more nuanced situational approach.
And then we come back to my earlier statement: even the best of us started out at that earlier stage, where we reflexively closed all objects because that was the best we were capable of. We left behind large bodies of code that people look at now, projecting our current competence backwards onto the earlier work and assuming we did it for reasons they don't understand.
I've got to get going, but I hope to expand this a little further...
I'm really interested in using SubSonic. I've downloaded it and I'm enjoying it so far, but looking at the activity on GitHub and Google Groups it doesn't seem to be very active and looks a lot like a project that's dying. There are no videos about it on Tekpub, and Rob seems to be using NHibernate for all his projects these days. I don't want to focus on learning SubSonic and integrating it into my projects if it's not going to live much longer.
So my question is: what's happening with SubSonic development? Is a new release imminent? Is there lots going on behind the scenes, or is it as inactive as it seems?
I get this question, it seems, if I don't pop a release every 2 months or so. I will admit I'm behind on getting 3.0.0.4 out the door - but there's some patched code that people are sending in without tests and I will not accept that - I'd rather take my time and make sure we don't push bugs (which I apparently did with 3.0.0.3).
Anyway - it's a valid question and no, SubSonic isn't "dying". The best place to see the activity is on the Github site itself:
http://github.com/subsonic/
This is one of the main reasons I chose Github, so people can see the activity. I just pulled in a number of changes and am waiting on a last one to get tweaked (there were merge conflicts).
RE your other points:
No, I'm not using NHibernate for my work; I'm using it for Kona and a screencast. I answer just about all the email I get from our group, but yes, Google Groups is a sad thing when it comes to pruning the spam. Your best bet is to just email the group list with a question - it will get answered pretty quickly.
In terms of "death" - I need to talk about that a bit. Open Source projects are incapable of dying if they were born in a fit of inspiration and people find it useful. Both are true of SubSonic. Even if I gave up and told everyone to f-off, someone would pick it up and run with it. I do have to work, like most people, and I have to fit SubSonic into the little amounts of freetime I have between work and family. But there's no way I'd let this die - it means far too much to me.
Either way - I'm sure I'll be back here again in 6 months, answering this question again :).
I suspect, since it's that time of year, people are on holiday/vacation, so support here is reduced. I have just started using it and haven't had responses to some questions, and the last release was in July, so I am hoping support continues.
I must state that although there isn't a new release every 2 months, as Rob stated, you may get that feeling sometimes. There is still action on the Google group and GitHub, though. If anything, before Christmas there were more fresh faces starting to make contributions than before (even simple ones like documentation). This shows me that there may be more interest than ever; it's just that people are getting on with it.
My work uses SubSonic (both 2.2 and 3.0.0.3) in most projects where we have control over it. We have around 28 .NET devs and they all love it (we don't get caught up in what it can't do, as it's not a full ORM/data access layer per se).
As we only use SubSonic as a low-level query tool and not as a data access layer, I suppose we're not too closely tied to it if we ever need to bail on it, but we have yet to have a reason to.
My point is this: it's a really, really easy to use, easy to pick up, easy to modify, lightweight query tool/ORM (to a lesser extent). There are few tools out there that have all these properties and yet don't lock you into a million schools of thought on things. Because of this I don't see it dying any time soon - it's too addictive a tool to have on your bat belt.
I'm an active record fan buoy and SubSonic Rocksorz My Sockorz!
Because of this I recommend SubSonic to a lot of people and will continue to, though we don't use it on extremely large projects (more for project continuity reasons, like you mentioned, than because it can't do the job).
Well.
I don't know how SubSonic is progressing. I started using SubSonic in 2007, and before upgrading to SubSonic 3 I was pretty comfortable with SubSonic 2: it is stable and predictable. But SubSonic 3, even 3.0.0.3, is somewhat of a disappointment to me. I don't want to dwell on the features that work - thanks, SubSonic maps tables correctly. The thing I want to talk about is "Update". I tried it with the code below, but it threw an exception. After digging into the code, I was left sighing...
Look at my code:
FarmDB db = new FarmDB();
db.Update<UserAdornment>()
    .Set(o => o.is_working == false)
    .Where(o => o.user_name == HttpContext.Current.User.Identity.Name
             && o.type == userAdornment.type
             && o.id != userAdornment.id)
    .Execute();
Is this correct?
After fixing the NullReferenceException that someone else had asked about (and which I suffered too): each time I run this query, all my rows with user_name = currentname have is_working set to false. After checking the code:
In update.cs
public Update<T> Where(Expression<Func<T, bool>> column)
{
    LambdaExpression lamda = column;
    Constraint c = lamda.ParseConstraint();
    // ...
}
And checking lamda.ParseConstraint():
I see that no matter how many "where" conditions I want to apply, it only returns the first one. The worse thing is that after it,
//IColumn col = tbl.GetColumnByPropertyName(c.ColumnName);
//Constraint con = new Constraint(c.Condition, col.Name, col.QualifiedName, col.Name);
//con.ParameterName = col.PropertyName;
//con.ParameterValue = c.ParameterValue;
it builds another constraint from the previous one, but drops all the conditions from the last one.
How can that be right?
I haven't looked into SubSonic's source code too much and don't understand well how it is implemented. But I am using SubSonic 3 in my project and depend on it heavily to work correctly. I really hope every bug can be tested and fixed in time.
I was refactoring some code in a web application today and came across something like this in the base class for all webpages:
if (Request.QueryString["IgnoreValidation"] != null)
{
if (Request.QueryString["IgnoreValidation"].ToUpper() == "TRUE")
{
SessionData.IgnoreValidation = true;
}
}
To me, this appears to be a Very Bad Thing™, so I instantly removed all traces of this flag from the code. For one, there were several if statements littered throughout that checked the value of the flag, leading to cluttered and unclear logic. Secondly, I came across another, more dangerous flag named "IgnoreCreditCardValidation". You can guess what that one did...
I then got to thinking about it and remembered a similar example from a previous job. In the code of an app sold as a "secure authentication module" there was a QueryString parameter used to override the default behavior, effectively allowing anyone with knowledge of it to bypass authentication.
Now my question is more of a confirmation, is this practice as bad as it seems in my head or am I just overreacting and this is commonplace? Are there any cases where there would be a valid reason to do this? To me it just seems like an awful mix of laziness and carelessness.
If this is a duplicate, please feel free to point me in the right direction.
Thanks!
It's horrifying whether it's common practice or not. +1 to you for nuking it with extreme prejudice.
I agree with you. Especially if the module is designed to enforce security, this is a stupid thing to have in a release build (it's not a good idea to have in debug builds either, but that might be reasonable.) It's essentially security-by-obscurity.
It's not you. Whoever wrote that has never dealt with public-facing web applications. As you have correctly noted, anyone with knowledge of this 'backdoor' can wreak havoc on your application.
It's the original developer being lazy when it came to both design and testing.
The ideal solution is to separate the validation, credit card authentication, etc. out of the code into separate DLLs/services. The real versions can then be replaced by mocks to facilitate testing of the site. The services can also be tested independently.
These mocks never get anywhere near the production server so you should never have backdoors into your code.
You can also replace any/all of the services without having to update your application - as long as the new service presents the same interface as the original.
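A minimal sketch of that idea, assuming a classic dependency-injection seam (the interface and class names here are hypothetical):

// A hypothetical seam for credit card validation.
public interface ICreditCardValidator
{
    bool Validate(string cardNumber);
}

// Real implementation, deployed to production.
public class RealCreditCardValidator : ICreditCardValidator
{
    public bool Validate(string cardNumber)
    {
        // ... the real gateway/Luhn-check logic lives here ...
        return !string.IsNullOrEmpty(cardNumber);
    }
}

// Test double, referenced only by the test project - never deployed.
public class AlwaysValidCreditCardValidator : ICreditCardValidator
{
    public bool Validate(string cardNumber)
    {
        return true;
    }
}

The pages depend only on ICreditCardValidator; which implementation they get is decided at composition time (configuration/DI), so the "bypass" exists only in the test build, never behind a production query string.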
I'm going against the grain and saying querystrings make good debug switches.
Before everyone jumps on me: using a querystring to disable validation is horrendously stupid, a giant security hole, and should never happen. You were completely justified in nuking the code immediately.
But for debugging, querystrings work great. We have a few querystrings that will turn on IDs for a few objects on our pages so we can quickly get them without logging into the production database (not everyone has access to the Prod DB of course), or for checking calculation components. You just have to be careful and intelligent about them.
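To illustrate the distinction, here is a sketch of the kind of harmless switch I mean (the class name and the config flag are made up); the key is that it only adds read-only display information and is never allowed to gate security or validation logic:

using System;
using System.Configuration;
using System.Web.UI;

// Hypothetical page base class: ?ShowIds=true reveals internal record IDs
// next to objects on the page, purely as additive display information.
public class DebuggablePage : Page
{
    protected bool ShowDebugIds
    {
        get
        {
            // Hypothetical app setting, deployed as "true" only on test
            // servers, so the switch stays inert in production even if
            // someone discovers it.
            bool debugUiAllowed = string.Equals(
                ConfigurationManager.AppSettings["EnableDebugUi"],
                "true", StringComparison.OrdinalIgnoreCase);

            return debugUiAllowed && string.Equals(
                Request.QueryString["ShowIds"],
                "true", StringComparison.OrdinalIgnoreCase);
        }
    }
}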