I've read that you can use solutions to create layers of customisations. I'm not 100% sure what this means.
Does it mean that I can layer customisations in a similar fashion to how I can layer graphics in Photoshop? Then, if I find that one layer is wrong, I can simply delete it, revealing the layers underneath it?
If that's how solution layers work in CRM, that is absolutely amazing. If that is not how they work, how do I delete a bad layer to revert back to a previous state?
CRM solutions are rather complicated and not as simple as you'd think, and there tend to be lots of gotchas. There are two different types of solutions: managed and unmanaged. Unmanaged changes sit on top of managed changes, so if you have an unmanaged change to an entity, you generally can't deploy a managed solution change to override it. Managed changes, however, are layered in the order in which the solutions were installed: last one in wins.
Managed solutions also allow you to uninstall, which removes the changes the solution added. Unmanaged solutions don't provide a way to remove the components they add; you have to remove them manually.
The current direction Microsoft is taking is that unmanaged solutions are for dev work and managed solutions are for all other environments. They aren't quite there yet, so I'm currently in favor of using unmanaged in most situations.
But, to answer your question: yes, solutions are layered, and the order in which they are stacked (added) will affect the end result of the system. Removing a (managed) solution can expose the solutions that were added before it.
With my current workflow I have to check the diff in git and manually review changes, like added functions, after every import-jdl that touches an entity I changed.
Is there a way to add functions to the classes JHipster creates without actually changing the files? Something like code generation with annotations, or extending JHipster-created classes? I feel like I am missing some important documentation from JHipster; I would be grateful for pointers in the right direction.
Thanks!
I faced this problem in one of my projects and I'm afraid there is no easy way to tell JHipster not to overwrite your changes.
The good news is you have two ways of mitigating this and both will make your life much easier.
Update your entities in a separate branch
The idea is to update your entities (execute the import-jdl command) in a different branch and then, once the whole process is finished, merge the changes back to master.
This requires no extra changes to your code. The problem I had with this approach is that sometimes the merges were not trivial and I still had to go through a lot of code just to be sure that everything was still in place and working properly.
Do not change the generated code
This is known as the side-by-side practice. The general idea is that you never change the generated code directly; instead, you put your custom code in new files and extend the original ones whenever possible.
This way you can update your entities and JHipster will never remove or modify your custom code.
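For illustration, here is a minimal sketch of the idea in Java. All of the names (FooService, FooRepository, Foo, getLastModifiedDate) are hypothetical stand-ins for whatever JHipster generated for your entity; the point is that the custom class lives in its own file, which import-jdl never touches:

import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

import org.springframework.context.annotation.Primary;
import org.springframework.stereotype.Service;

// Lives outside the generated sources, so re-running import-jdl
// never overwrites it.
@Service
@Primary // Spring injects this bean wherever a FooService is required
public class CustomFooService extends FooService {

    private final FooRepository fooRepository;

    public CustomFooService(FooRepository fooRepository) {
        super(fooRepository);
        this.fooRepository = fooRepository;
    }

    // Custom behavior is added here instead of editing FooService.
    public List<Foo> findModifiedSince(Instant since) {
        return fooRepository.findAll().stream()
            .filter(f -> f.getLastModifiedDate() != null
                      && f.getLastModifiedDate().isAfter(since))
            .collect(Collectors.toList());
    }
}

The @Primary annotation is standard Spring: it makes the rest of the generated code receive the customized bean without any edits to the generated files.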
There are two videos available that will teach you (with examples) how to manage this:
Custom and Generated Code Side by Side by Antonio Goncalves
JHipster side-by-side in practice by David Steiman
In my opinion this is the best approach.
I know this probably isn't the answer you were looking for, but to my knowledge there's no better way.
We currently have two solutions that share several projects, as well as some projects that are unique to each of them. We currently have a build definition for each of these solutions, set to Gated Check-in.
Unfortunately, it seems that having multiple definitions with gated checkins set means that if I make a change to one of the shared projects, it only runs one definition. In a perfect world, I want it to build both solutions in this circumstance.
I know that I could just create a single build definition that builds both solutions, and this will work great in the scenario in question, but then if I am modifying a project that is unique to one solution, it will still build both solutions, ugh.
Is there a way to configure our builds such that we get the best of both worlds? I would like the consistency of ensuring shared code works correctly in both solutions, but I would also like builds not to take double the time for changes that affect only one solution or the other (by far our most common use case).
Or am I just stuck with the tradeoff of one or the other?
The basic problem with your current situation is: how do you identify the change? That is, was it the common project or a unique project that was modified? I don't think there is any easy means of identifying this at the time the code is built.
One option, which is not the best solution, would be to separate the common projects out into another solution that compiles and puts the DLLs in a common location that the unique solutions use. This way you can have three independent gated check-ins: if there is a change to the common solution, you compile both unique solutions within the same build definition; if not, each unique solution compiles in its own build definition.
I understand the appeal of using the data-driven Entity-Component System for game development. Naturally I am trying to find other areas to apply this paradigm. As I am about to embark on developing a small business application, I've been wondering how well Entity-Component would fit in with it. However I cannot find any examples or discussions on using Entity-Component in anything besides games. Is there a reason? Would there be any advantages in using Entity-Component in software besides games?
I ended up taking a risk and trying to use ECS outside of the gaming domain (now as an indie, formerly a company employee), with results that astounded me. I wouldn't do things any other way now, and I have an easier-to-maintain system than ever before (not perfect, but so much better than the COM-style architectures we used to use in my industry). I took the plunge mainly because ECS seemed to provide answers for all the things my team and I had struggled with in the past using a COM architecture, though I imagined that with such a risky move I might just end up exchanging one set of problems for another (I was willing to take the risk now that I was on my own). It turned out I didn't exchange one can of worms for another: ECS solved practically all of those problems while barely introducing any new ones.
That said, I'm in the VFX domain and it's not that different from games. We still have to animate things like characters, emit particles, interact with meshes, textures, play sound clips, render the result, allow people to write plugins, scripts, etc.
To try to apply ECS in a business domain is far more ballsy. That said, I imagine it could really help create a maintainable system if you have relatively few systems processing a huge number of entity combinations.
Maintainability
What I found made ECS so much easier for me to maintain compared to previous object-oriented approaches, even within my personal projects, was that those approaches often transferred the maintenance overhead away from the clients using the classes and onto the classes themselves. The catch was that there would be dozens of interfaces and hundreds of subclasses, all inheriting different things and implementing different interfaces, each to be maintained individually. Testing also becomes difficult with so many granular classes and the need to do mock testing.
My brain can only handle so much and hundreds of subclasses interacting with each other was far beyond the limit. Very quickly I found myself no longer able to reason about what was going on, let alone when or where, overwhelmed by complex interactions leading to complex side effects, and never so confident that I could sandwich new code somewhere in there without causing unwanted side effects.
The computing scientist’s main challenge is not to get confused by the complexities of his own making. -- E. W. Dijkstra
This applied even for projects I exclusively authored myself. There came a breaking point, typically after a few hundred thousand LOC or so, where I could no longer even comprehend my own creation. I'd refactor here and there, pick up a little momentum, only to take a vacation, come back, and be lost all over again.
ECS removed that challenge. I don't mean that I can now take a 2-week vacation, come back to the codebase, look at some code, and get the vision of crystal clarity I had when I was writing it in the first place; ECS doesn't improve things that much in this regard, and it still takes some time for me to reacquaint myself with code I haven't looked at in a good while. The reason ECS helped so much is that I didn't need to recall everything I wrote in order to extend and change the software. The systems are so decoupled from each other that it's not a huge deal if I forget exactly how one works. I can just concentrate on what I need to do, without having to worry about complex side effects being triggered through complex interactions of control flow.
This applies even when introducing brand new core-level features into the product. These days when I introduce a new central feature, like a brand new audio system, the only thing I have to think much about is how to integrate it into the user interface. Integrating it into the architecture is relatively effortless compared to the previous architectures I worked in.
Meanwhile, with the ECS, I only have to maintain a couple dozen systems to provide no less functionality than before. They do have some complex logic inside, but I don't have to maintain the hundreds of different entity combinations, since entities just store components, and I don't have to maintain the component types, since they just store raw data and I rarely ever find the need to go back and change them (very close to never).
Extensibility
Being able to extend an ECS architecture in hindsight with central concepts is about the easiest thing I've encountered so far and requires the minimum amount of knowledge of how the existing codebase works.
As a very fresh example, I recently encountered a strong desire for scripters using my software to be able to access entities in the scene using a simple, global name. Before, they had to specify a full scene path like Scene.Lights.World.Sunlight, as opposed to simply Sunlight.
Normally in the previous architectures I worked in, that would have ranged from a highly intrusive to moderately intrusive change. A COM-style system revolving around pure interfaces might require introducing a new interface or, worse, changing an existing one, and updating a few hundred subtypes to implement the new functions. If we had a central abstract base class that everything already inherited, we might be able to modify that one centrally to implement this new interface (or the new parts of an existing interface), but it would likely be monstrous if there was a central base class for everything that might want such a name, and require wading through a lot of delicate code.
With the ECS, all I had to do was introduce a new component, GlobalName, with a system that processes GlobalName components and can find an entity quickly through a specified name. It also handles making sure that no two GlobalName components have a matching name. Due to the nature of the ECS, it's also very easy to pick up when this GlobalName component is destroyed as a result of an entity being destroyed or the component being removed from it to keep the data structure used to accelerate searches by name (a trie) in sync.
After that I was just able to attach this GlobalName component to anything that scripters wanted to refer to by a global name. They can also attach it themselves and then refer to a given entity later through that name. Components also serialize themselves in ways that preserve backwards compatibility for the most part (ex: previous versions of the software which did not know what GlobalName was will simply ignore it upon loading scene data referring to it).
It was about as painless and as non-intrusive a change as I could imagine, considering that this was added very late, in hindsight, to 4-year-old software which did not anticipate the need for it whatsoever. And I managed to get it working just fine on the very first try. As a bonus, all the non-trivial code newly added to make this work lives isolated in its own space; it's not jumbled up with anything else and doesn't contribute to the complexity of anything else, as would inevitably have to be the case if I used abstract interfaces or base classes. I did not have to modify anything central to make this work except a few lines of trivial script and some trivial GUI code to display these global names when available.
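As a rough sketch of the shape of this (illustrative Java with invented names, not my actual engine code; a plain map stands in for the trie just to keep it short):

import java.util.HashMap;
import java.util.Map;

// The component is raw data, nothing more.
final class GlobalName {
    final String name;
    GlobalName(String name) { this.name = name; }
}

// The system owns all the non-trivial logic: it keeps the lookup
// structure in sync as components come and go, and enforces uniqueness.
final class GlobalNameSystem {
    private final Map<String, Integer> entityByName = new HashMap<>();

    void onAdded(int entity, GlobalName c) {
        if (entityByName.putIfAbsent(c.name, entity) != null)
            throw new IllegalStateException("duplicate global name: " + c.name);
    }

    // Also fires when the owning entity is destroyed.
    void onRemoved(GlobalName c) {
        entityByName.remove(c.name);
    }

    // What a script calls: "Sunlight" -> entity, no scene path needed.
    Integer find(String name) {
        return entityByName.get(name);
    }
}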
"Inherit Anywhere"
Have you ever wished you could extend a class's functionality from anywhere in your code without actually modifying its code? For example:
// In some part of the system exists a complex beast of a class
// which is tricky to modify:
class Foo {...};
// In some other part of the system is a simple class that offers
// new behavior we'd like to have in 'Foo', with abstract functionality
// (virtual functions, i.e.) open to substitution:
class Bar {...};
// In some totally different part of the system, maybe even a script,
// make Foo inherit Bar's behavior on the fly, including its default
// constructor, copy constructor, and destructor behavior for Bar's state.
Foo.inherit(Bar);
The above leaves the question: where will the abstract functionality of Bar be implemented, since Foo doesn't provide such an implementation? That's where systems analogically kick in for ECS.
I think the temptation will be there for most of us who have had to wade through some existing class's complex code just to make it do a few new things, all while risking unwanted side effects/glitches/toe-stepping. We might also have wished that a third-party library outside of our control offered just a bit more functionality that we'd find very useful throughout the code using it, if only it provided "this one thing". Or we might just hate the idea of having to change our colleagues' existing code (we don't want to step on toes) even though we're tasked with providing new central behavior.
ECS offers you that kind of flexibility, although in a very different way from the above example (but with the analogical benefits). It allows you to extend anything's behavior/functionality/state from anywhere. As in the extensibility example above, I did not have to modify anything that already existed to provide that global-name searching functionality and state. I can extend the behavior of these entities from the outside, even from script, by just adding a new type of component to any entity I want, at which point any systems I write that are interested in such components can pick them up and process them using a duck-typing approach ("if it has a GlobalName component, it can be given a global name, which can be used to find the matching entity very quickly").
Associating Data
Similar to the above, have you ever faced the temptation to associate data with existing objects in the code? In such cases we might have to maintain parallel arrays or associative containers like dictionaries/maps, and such code can be tricky to write correctly given that it has to stay in sync as new objects are added and removed.
ECS solves that problem at a central level, since now you can just attach components and remove components to/from any entity you want very efficiently. That becomes your means of associating new data on the fly. You no longer have to manually synchronize associative data structures.
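A tiny sketch of what that central mechanism can look like (again invented, illustrative Java, not my engine; real ECS storage is usually flat arrays rather than a map, but the interface is the point):

import java.util.HashMap;
import java.util.Map;

// One central store per component type replaces every ad-hoc parallel
// array or dictionary that previously had to be kept in sync by hand.
final class ComponentStore<T> {
    private final Map<Integer, T> byEntity = new HashMap<>();

    void attach(int entity, T component) { byEntity.put(entity, component); }
    void detach(int entity)              { byEntity.remove(entity); }
    T get(int entity)                    { return byEntity.get(entity); }
    boolean has(int entity)              { return byEntity.containsKey(entity); }
}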
Testing
Another issue for me personally (and it may be because I never mastered the art of unit testing, though I did work with a colleague who really studied up on the subject) is that unit testing never made me confident that a system was relatively bug-free. Integration tests gave me greater confidence in that regard. The problem for me was this: even if the unit test passes, how do you know the client will not misuse the interface? What if they use it at the wrong time? What if they try to use it from multiple threads when it's deliberately not designed to be thread-safe?
I get no huge sense of relief from seeing unit tests pass, since most of the bugs we encountered had to do with what was going on between the interfaces being tested, and many kept coming in despite the hundreds of unit tests we wrote all passing. I love test-driven development, and I did find value in a unit test telling me that one unit was doing what it was supposed to do, which allowed me to use it more confidently throughout the codebase, but unit testing never gave me a huge sense of relief about the correctness of the codebase as a whole.
ECS solved that problem for me and made unit testing much more valuable, even to someone like me who never mastered the art of testing, since there are only a handful of systems, they each do their hefty share of work (they aren't granular little objects), and they're concrete. If we have to do anything resembling mock testing, it's simply to insert the components/entities necessary to run and test them. It starts to feel like testing a system is closer to integration testing than unit testing, even though the system is the smallest testable unit.
Homogeneous Processing
Applying ECS requires embracing a more loopy kind of logic, with more homogeneous loops doing one thing at a time. A lot of OOP tends to encourage non-homogeneous control flows and complex interactions, causing many things to happen in any given phase/state of the system. This was the part I found most difficult initially, since I wanted to apply disparate tasks at one time to a given entity/set of components, and that temptation couldn't be satisfied so directly given decoupled systems which only perform one task at a time. So I had to learn how to defer processing, storing some state for the next system to use, and I also use (sparingly) an event queue so that systems can trigger events that get processed by others.
Nevertheless, I found ways to program the equivalent of a complex interaction as a result of a series of simple loops doing one thing at a time. It never turned out to be as difficult as I imagined to force myself to work this way, applying one uniform task over one set of entities at one time. And after being forced to do this for a while and maintaining the results -- wow! I should have been doing that all along. It's actually kind of depressing reflecting back on a decade of maintaining architectures that were so much harder to maintain than they needed to be after getting the breath of fresh air that was the ECS architecture.
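In sketch form (same caveats as the earlier snippets: invented names, illustrative only), the resulting control flow is just a flat sequence of homogeneous passes, with an event queue for the cases where one system needs to trigger work in another:

import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

interface EcsSystem {
    void update(World world, double dt);
}

final class World {
    // Systems push events here instead of calling each other directly;
    // another system drains the queue on its own later pass.
    final Queue<Object> events = new ArrayDeque<>();
    // ... component stores live here ...
}

final class MainLoop {
    private final List<EcsSystem> systems;

    MainLoop(List<EcsSystem> systems) { this.systems = systems; }

    void tick(World world, double dt) {
        // Each system performs one uniform task over its entities;
        // nothing else happens in between.
        for (EcsSystem system : systems) {
            system.update(world, dt);
        }
    }
}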
Interactions
This is a simplified "interaction" diagram (not necessarily indicative of direct coupling, as the coupling version would be from concrete objects to abstract interfaces) comparing the differences before and after I adopted ECS. Here's before:
Except the real thing was between far more than this small number of types (I was too lazy to draw hundreds). This was why I always struggled to maintain these things and felt tangled up in the code: the interactions between the code really were a tangled mess, leading you to all sorts of remote functions in the system and causing side effects along the way. After (and now the components are just raw data; they contain no functionality of their own):
And the second version was so, so much easier to comprehend, so much easier to extend, so much easier to maintain, so much easier to reason about in terms of correctness, so much easier to test, etc. If your business architecture can effectively fit into the second type of model, I can't overstate how much it can simplify everything.
Invariants
One of the scariest parts to me when I started developing the ECS engine was the lack of information hiding. When components are just raw data, they're dangling what I thought should be their privates in the air for anyone to touch. This could be doubly scary in a business domain that might be more mission-critical in nature.
Yet I found invariants just as easy to maintain, if not more, due to the limited number of systems that access any given component (and typically if the data is modified, it only makes sense for one system in the entire codebase to do it), the extremely simple control flows, and the extremely predictable side effects that result. And it's pretty easy to test the codebase for correctness when you just have a handful of systems to worry about as far as functionality.
Conclusion
So if you are willing to take the risk, I think ECS could be applied very effectively in certain business domains. The main thing worth thinking about upfront is whether you can model the entirety of your software's needs as a handful of systems processing data stored in components, with each system still performing a bulky but singular responsibility (the analogical equivalents of a RenderingSystem, GuiSystem, PhysicsSystem, InputSystem, etc.). Naturally, the benefits of ECS diminish if you find you need hundreds of disparate systems to capture the business logic.
If you're interested, I can extend my answer in some later iterations and try to go over some of the minor struggles I faced with the ECS when I was completely wet behind the ears about it.
(Apologies for the necromancy)
Coming from an enterprise background, I have recently been considering this question. Entity-component systems are comparatively new, and represent a completely different design paradigm to what most business developers will have experience with.
Considering my own company's example, I have seen a few scenarios where an entity-component system would offer benefits.
For example, in our primary application, addresses are associated with contacts and organisations. (There are ContactAddress and OrganisationAddress joining tables in our database.) One client wishes to associate projects with addresses as well. There are many ways of achieving this, but an entity-component based approach would seem quite elegant to me - simply add an Addressable component to the Project entity, and the GUI should sort itself out.
Instead, we will likely be adding a new joining table and new data-input pages (albeit re-using common controls).
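For illustration, the component-based route might have looked roughly like this (hypothetical Java, not our codebase; the joining-table route we're actually taking needs none of it, which is rather the point):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Raw data: any entity carrying this component "has addresses".
final class Addressable {
    final List<Integer> addressIds = new ArrayList<>();
}

final class AddressableStore {
    private final Map<Integer, Addressable> byEntity = new HashMap<>();

    // Contact and Organisation entities would already have entries here.
    // Associating projects with addresses becomes a one-line attach,
    // with no new joining table and no new data-input pages.
    void attach(int entity) { byEntity.putIfAbsent(entity, new Addressable()); }

    Addressable get(int entity) { return byEntity.get(entity); }
}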
The primary disadvantage, I would think, would be (initial) lack of developer awareness of the best ways of applying this paradigm to business software, precisely because it doesn't appear to have been done before. Once you start with such an approach, you are committed to it - if it proves frustrating once your project reaches a certain complexity, there's no way out without a significant rewrite.
I am looking for a good way to keep a design document up to date with the latest decisions.
We are a small team (two developers, game designer, graphic designer, project manager, sales guy). Most of our projects last a couple of months. At the start of the project a design is made but we generally find ourselves making changes or new decisions throughout the project. Most of these changes are improvements, so we want to keep our process like that. (If the changed design results in more time needed this is generally taken care of, so that part is OK)
However, at the moment we have no nice way of capturing the changes to the initial design document and this results in the initial design quickly being abandoned as a source while coding. This is of course a waste of effort.
Currently our documents are OpenOffice/Word, and the best way to track changes in those documents would probably be adding a changelist to the top of the document and making the changes in the text in parallel, which is not an option I'd consider ideal.
I've looked at requirements management software, but that looks way too specialized. The documents could be stored in Subversion, but I think that is a bit too low-level to give insight into the changes.
Does anyone know a good way to track changes like these and keep the design document a valuable resource throughout the project?
EDIT: At the moment we mostly rely on changes to the original design being put in the bugtracker, that way they are at least somewhere.
EDIT: Related question
Is version control (ie. Subversion) applicable in document tracking?
I've found a wiki with revision logging works well as a step-up from Word documents, provided the number of users is relatively small. Finding one that makes it easy to make quick edits is helpful in ensuring it's kept up to date.
Both OpenOffice and Word include capabilities for showing/hiding edits to your document. Assuming there's resistance to change, that's your best option; either that, or export to text and put it into any source control software.
Alternatively, maintain a separate (diffable using the appropriate tool) document for change-description text, and save archive versions at appropriate points in time.
This problem has been a long-standing issue in our programming shop too. The funny thing is that programmers tend to look at this from the wrong optimization angle: "keep everything in one place". In my opinion, you have two main issues:
The changes' descriptions must be easy to read ("So what's new?")
The process should be optimized for writing a specification that can be agreed upon, and then getting to work already!
Imagine how this problem is solved in another environment: government law making. The lawbook is not rewritten with "track changes" turned on every time the government adds another law, or changes one...
The best way is to never touch a released document. Don't stuff everything into the same file, or you'll get:
the dreaded version history table,
the eternal status of "draft",
scattered inconsistencies,
horribly rushed sentences, and
a foul-smelling blend of authors' styles.
Instead, release an addendum, describing only the changes in detail, and possibly replacing full paragraphs/pages of the original.
With the size of our project, this can never work, can it?
In my biggest project so far, I released one base spec, and 5 consecutive addenda. Each of around 5 pages. Worked like a charm!
I don't know any good, free configuration management tools, but why not place your design under source control? Just add it to SVN, CVS, or whatever you are using. This is good because:
1) It is always up to date (if you check it in, of course)
2) It is centralized
3) You can keep track of changes by using the built-in compare feature, available in almost any source control system
It may not be the 'enterprisish' solution you'd want, but you are a small team of developers anyway, so for that situation, it is more than perfect.
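For a small team the whole workflow is just a couple of commands (paths are placeholders; note that binary .odt/.doc files won't produce readable diffs, so export to plain text first if you want the compare feature to be useful):

svn add docs/design.txt
svn commit -m "Design: reworked level progression"
svn diff -r PREV docs/design.txt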
EDIT: I see now that you already mentioned a source control system, my mistake. Still, I think it should work well.
Use Google Docs. It's free, web-based, multi-user in real time; you can choose who has access to your documents, and it keeps versioning. You can also upload all your Word documents and it will convert them for you.
For more information: http://www.google.com/google-d-s/intl/en/tour2.html
I have a complex sharepoint deploy with multiple EventReceivers and Workflows.
I also have schema changes to existing lists, adding new columns of metadata and changing existing columns.
Should I package each feature, event receiver, or workflow into its own solution, or should I put multiple features inside a single solution, since they all work together?
One major reason I am asking is future code upgrades. If the features are separated, then an upgrade to one portion of the code would not require a re-deploy of all the features in the solution. Is this something I should worry about, or does "stsadm -o upgradesolution" take care of any issues with upgrading a solution that has many features?
Let me know if this makes sense to any SharePoint gurus out there.
Thank you,
Keith
Update:
Looking at the website drax referenced, I found this reference site: http://msdn.microsoft.com/en-us/library/aa543659.aspx
This statement seems to put a large handicap on upgrading features in solutions:
Solution upgrade can only be used to replace files. You can add new files in a solution upgrade and remove old versions of the files, but you cannot install Features or use Feature event handlers to run code for Feature installation and activation. The following operations are not supported in solution upgrade:
Removing old Features in a new version of a solution.
Adding new Features in a solution upgrade.
Updating or changing the receiver assembly for existing Features in a new version of a solution.
Adding or changing Feature elements (Element.xml files) in a new version of a solution.
Adding or changing Feature properties in a new version of a solution.
Changing the ID or scope of old Features in a new version of a solution.
Removing Feature elements (Element.xml files) in a new version of a solution.
Removing Feature properties in a new version of a solution.
So... What can you do with a solution upgrade?
I would advise against splitting everything into multiple solutions; maintaining that can quickly become a nightmare. Try to structure the project that is used to create the WSP in the same manner as SharePoint's 12 folder. Then you can use WSPBuilder; the last stable version brings a lot of useful stuff.
Also, I've not noticed any problems with redeploying solutions. According to this article, and in my experience, deployment of a WSP takes care of synchronization between versions. So if you add some new features they will appear, and if you remove/change features they will be modified accordingly.
EDITED:
So I did some quick research on the topic of updating MOSS solutions. According to MS there are two ways of updating them:
In-place update
Incremental update
Basically, in-place update is the standard way of updating, meaning you rely on the built-in functionality as described in this document (the same document as posted before). The problem with this approach is that it lacks quite a lot of functionality (versioning, changing the IDs of features, and so on).
Incremental update (this is probably what MS calls it) doesn't rely on the built-in functionality. That means it is up to everybody to implement it themselves :(. Even better, I was not really able to find any guidelines for this approach. I suppose the approach you would like to take is an example of an incremental update (splitting the project into many independent solutions).
Also note that incremental update is not officially supported by MS.
So I don't really know what advice I should give you. A single WSP is more maintainable than a bunch of them, and if you are making just minor changes, updates work perfectly. But if you need to make bigger structural changes, problems start to show.
I'll probably wait and see if people with more MOSS expertise can say something about this topic.
Basically (for the reasons you've mentioned), you should think of solutions as you would .Net assemblies - atomic units of code that can be deployed separately from others. Using upgradesolution will cause a redeploy of all the contained features - if nothing's changed, then nothing should change for the sites that use that feature. But, if that makes you nervous, consider splitting it up.
UpgradeSolution is really handy if you are just updating the assembly and leaving the provisioned files intact.
Unless you specify -local, upgradesolution will perform a full IIS reset across your infrastructure. This is really worth noting when you are planning the right time to perform upgrades.
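For reference, an in-place upgrade is typically run in one of these two forms (solution name and path are placeholders):

stsadm -o upgradesolution -name MySolution.wsp -filename C:\Deploy\MySolution.wsp -immediate -allowgacdeployment
stsadm -o upgradesolution -name MySolution.wsp -filename C:\Deploy\MySolution.wsp -local -allowgacdeployment

The first form schedules the upgrade across the farm (with the farm-wide IIS reset mentioned above); the second, with -local, performs it synchronously on the current server only.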