Enable only needed ReSharper features

I used ReSharper for a long time, but I removed it a while ago because I hit the level where it became more of a distraction that lowered my productivity. I don't need it to hold my hand anymore, and I really like my Visual Studio as lean as possible.
However, from time to time I run into problems where I could make very good use of a feature or two from it.
Is there a way to install ReSharper without it actually having an effect on anything, and then, when the need arises for a specific feature, enable features one by one?


Is haskellmode-vim dead?

I just disabled haskellmode-vim in my plugin configuration. Basically this was for three reasons:
I prefer neocomplcache for my auto completion needs.
Apparently it hasn't been updated since 2010.
It doesn't seem to be compatible with cabal.
I hope that someone jumps in and points out that I have simply misconfigured the whole thing (as in, I only set up the most basic thing in the readme). To make this a question:
Is it possible to setup haskellmode such that ...
... it gets its configuration from cabal?
... it doesn't set `completefunc` so that neocomplcache still works?
Author here. I haven't had much chance to work with Haskell since 2010, so haskellmode for Vim has not been developed since then, either.
I used to think someone must have written something better since, or that my old code probably doesn't work with newer releases, but every few months someone mails me to say that they are still using this plugin and that it still works for them (which is a mix of pleasant surprise and an uncomfortable reminder of the lack of development/maintenance).
Some of them have created clones on GitHub (last time I checked, there were about a dozen), usually to accommodate the latest fashion in Vim plugin management (there may have been small hacks to make it build via cabal, but I recall no complete integration). Vim gives you a lot of control over the order of plugin loading, if you want another plugin to override the completefunc.
I still expect haskellmode-vim to drop out of usage sooner or later. However, if someone were to step forward willing to take on maintenance of one of the GitHub clones, that would be fine, too.
As long as credit is given, and modified plugins are marked as such, I'm also happy to see ideas from haskellmode-vim used in other plugins (there used to be a happy exchange of such ideas between vim and emacs haskell plugins), so more modern and active plugins could absorb any missing features from haskellmode-vim.

Is it possible to resize fixed dialog boxes that are part of Visual Studio?

In Visual Studio 6 the project settings dialog box is not resizable. Is there a reason for it to be so?
I know this is a long shot, but any trick to "fix" this problem?
The "trick" is to edit the resources for the executable manually and make the dialog boxes resizable.
But that doesn't actually solve the problem, or someone at Microsoft would surely have done that for one of the many intervening versions of Visual Studio between 6 and 10. In fact, this very thing has been repeatedly suggested on Microsoft Connect and UserVoice (among other places) as a substantial usability problem, and there seems to be general agreement about that fact.
The real problem is that you have to write code to get the controls on the dialog box to automatically resize when their container (the dialog) is resized. Since that's non-trivial, it hasn't been done yet for any of the new releases of VS. And there's no way to go back and do it on one of the old versions, since editing code in a compiled executable is something I wouldn't recommend to anyone.
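To give a rough idea of what that code looks like, here is a minimal Win32 sketch (not anything taken from Visual Studio itself; IDC_LIST is a hypothetical control ID, and the dialog template would also need the WS_THICKFRAME style before resizing works at all):

    // Minimal sketch: when the dialog receives WM_SIZE, every child
    // control has to be repositioned/resized by hand.
    #include <windows.h>

    #define IDC_LIST 1001   // hypothetical control ID for illustration

    INT_PTR CALLBACK SettingsDlgProc(HWND hDlg, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_SIZE:
        {
            const int width  = LOWORD(lParam);   // new client width
            const int height = HIWORD(lParam);   // new client height

            // Stretch the main list to fill the dialog, leaving a margin
            // and room for the OK button at the bottom.
            MoveWindow(GetDlgItem(hDlg, IDC_LIST), 10, 10, width - 20, height - 55, TRUE);

            // Keep the OK button anchored to the bottom-right corner.
            MoveWindow(GetDlgItem(hDlg, IDOK), width - 90, height - 35, 80, 25, TRUE);
            return TRUE;
        }
        }
        return FALSE;
    }

Now multiply that by every control on every page of the Project Settings dialog, and you can see why nobody has retrofitted it.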
It still might happen in a future version of VS, which would be a compelling reason to upgrade for anyone, not least people who are still using VC++ 6, a product released nearly 15 years ago.
Keep your fingers crossed.
As for adding it to VC++ 6 now, it's simply not possible. Your best hope would be a third-party extension that replaces the dialog with a resizable one. It won't be exactly equivalent to the built-in dialog, though, and it'll be very hard to find such add-on utilities now, given their age. You'd probably need to visit a museum.

Is PetraVM Jinx Beta 1 good?

PetraVM recently came out with a Beta release of their Jinx product. Has anyone checked it out yet? Any feedback?
By good, I mean:
1) easy to use
2) intuitive
3) useful
4) doesn't take a lot of code to integrate
... those kinds of things.
Thanks guys!
After stumbling across Jinx while poking around on Google, I have been on the beta and pre-beta tests, with a fair amount of usage already under my belt. As with any comments about a beta, please understand that things may change or may already have changed, so keep that in mind and take the following with a grain of salt.
So, going through the list of questions one by one:
1) Install and go. Jinx adds a control panel to Visual Studio, which you can mostly ignore as the defaults are good for most cases. Otherwise you just work normally and forget about it. Jinx does not instrument your code, require any additional libraries to be linked in, or demand any of the numerous other things some tools require.
2) The question of "intuitive" is really up to the user. If you understand threaded code and the sorts of bugs possible, Jinx just makes those bugs happen much more frequently, which by itself is a huge benefit to people doing threaded code. While Jinx attempts to stop the code in a state that makes the problem as obvious as possible, "obvious" and "intuitive" are really up to the skill of the programmer.
3) Useful? Anyone who has done threaded code before knows that a race condition can happen regularly or once a month depending on cosmic ray counts; that randomness makes debugging threaded code very difficult. With Jinx, even the most minor race condition can usually be reproduced consistently on the first run. This works even for lockless code that other static analysis or instrumenting tools would generally miss.
This sort of quick reproduction of problems is amazingly useful. Jinx has helped me track down a "one instruction in the wrong place" sort of bug that would hit once a week at most. Jinx forced the crash to happen almost immediately and allowed me to focus on the actual cause of the bug instead of being completely in the dark as to the real source.
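For anyone who hasn't been bitten by one of these yet, the snippet below is the textbook kind of race being described; it is purely illustrative C++, not Jinx-specific code:

    // Classic lost-update race: two threads increment a shared counter
    // with no synchronization, so increments are silently dropped, but
    // only on unlucky interleavings, which is why it can take weeks to
    // show up on its own.
    #include <iostream>
    #include <thread>

    static int counter = 0;          // shared and unprotected: the bug

    void worker()
    {
        for (int i = 0; i < 100000; ++i)
            ++counter;               // non-atomic read-modify-write
    }

    int main()
    {
        std::thread a(worker);
        std::thread b(worker);
        a.join();
        b.join();

        // Should be 200000; the race usually loses some increments.
        std::cout << "counter = " << counter << "\n";
    }

On a quiet machine this can print the right answer for a long time; a tool that deliberately perturbs thread scheduling makes the losing interleavings show up almost immediately.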
4) Integration with Jinx is a breeze. If you don't mind your machine becoming a bit slow, you can tell Jinx to watch the entire machine. It slows the machine down because it is actually watching everything on it, including the OS. While interesting, and useful if your software consists of multiple processes on the same machine, this is not recommended, as it can become painful to work with the machine.
Instead of using the global mode, adding an include and two lines of code does the primary work of registering the process with Jinx so that Jinx watches just the registered items. You can help Jinx further by using the Jinx-specific asserts and registering regions of code that are more important. In the case of the crash mentioned above, though, I didn't have to do any of that; Jinx found the problem without the additional integration work. In any case, the integration is extremely simple.
After using Jinx for the last couple of months, I have to say that overall it has been a great pleasure. I won't write new threaded code without Jinx running in the background anymore, simply because it does its intended job of turning obscure threading issues into immediate asserts/crashes. As mentioned, things that you could go weeks without seeing become problems almost immediately, which is a wonderful thing to have during initial testing and implementation.
KRB
BTW, PetraVM has changed its name to Corensic and you can find Jinx Beta 2 over at www.corensic.com.
--Prashant, the marketing guy at Corensic

One big release or several small ones?

When you're working on enhancements to an existing line of business application, do you think it's better to batch up changes into less frequent bigger releases, or continually ship new features in smaller releases? Assuming there are hardware upgrades or database upgrades, do you make these changes with the releases as well, or keep them separate?
Releasing everything together has the advantage that there's less disruption to the business, and less out-of-hours work involved, but any problem you later encounter could be due to the database upgrade, the hardware, or any number of software changes.
Releasing little and often makes it easier to track down any problems resulting from a release, but leads to more disruption and more time spent regression testing.
Which is better?
Consider how each release affects the customers. Will frequent small releases make them happier because critical problems get solved faster? Will this improve your sales and reputation? If it will, carefully estimate whether these benefits outweigh the extra work involved; otherwise just follow the path that is more convenient for you.
It really depends on the environment you're in. Some scenarios:
Many customers: You want all customers to have the same release, as far as possible. It is much easier to have annual, semiannual or quarterly big releases, as the testing and rollout coordination is very costly. In this case I would include the db changes as well.
"Big infrastructure": if you are working in a large company environment with dedicated personnel for operating systems and databases, again the overall cost of a release is big, and therefore less frequent, larger releases are better.
In short, calculate the cost of a release in manpower, business interruption, coordination and testing, and weigh it against the benefits of each new feature or bug fix.
I usually tend to have 1-2 big releases a year and bug fixes in between for show stoppers.
I think the best answer is: a mixture of the two.
For instance, if you added some eye candy, or made the name textbox more "ajaxy", or maybe threw in a new type of report - ship these as "small" releases. Release early and release often, as much as possible.
On the other hand, if you changed a user-facing process, forcing the users to be "retrained", or if you are requiring massive infrastructure changes - go for a big release, and do this as seldom as possible.
As you said, if there is little or no disruption, release as often as possible; your users will be the happier for it - and you will actually spend LESS time on regression testing, because you only have to test the things connected to the changes you made.
I think it's better to work it into one big release and patch things up as you need to fix problems down the line.
As a developer you need to anticipate possible problems and breakages in your system to make it as robust as you can. That usually means a lot of testing beforehand.
Also consider that the end user may not want to pay for the smaller increments in the product and may just as well wait for a big update. A good example of this is when I bought Adobe Photoshop: they seem to release a new version every year, so I simply waited until the time seemed right.
A smaller number of releases means you'll be finding all your bugs at once. It becomes harder to know which code change caused which bug. You then have more of an issue with cascading bugs - one code change you made months ago has caused a bug, but in the meantime, you've made another five changes that all depend on the bug you introduced months ago.
Smaller, high-quality releases are better. Smaller releases make it easier to have high quality.
Personally, I favor big releases.
I have software at home which presents updates multiple times a week. It's annoying because there is no auto-update feature, just a notification that keeps coming back.
You might want to take a look at this similar question: How Often should you release software updates
In fact - both.
Can you split development into a DEVEL and a RELEASE branch? Any urgent issues should be fixed ASAP on the REL branch and sent out to users as a hotfix. After a hotfix is applied to the REL branch, the need for the patch is passed on to the DEV team (note: to fix an issue on REL you have to write some quick code, while in the DEV branch you need to put some time into rethinking the proposed fix, since conditions in the DEV branch might have changed; it is common to end up writing completely different code to fix the same issue in the DEV and REL branches).
When development of the brand new version is done, you have to test the new features and the patches migrated from REL. If everything is OK, you can deploy the brand new big version, archive the current DEV into REL, and seal off the old REL.

How do you balance the conflicting needs of backwards compatibility and innovation?

I work on an application that has both a GUI (graphical) and an API (scripting) interface. Our product has a very large installed base. Many customers have invested a lot of time and effort into writing scripts that use our product.
In all of our designs and implementation, we (understandably) have a very strict requirement to maintain 100% backwards compatibility. A script which ran before must continue to run in exactly the same way, without any modification, when we introduce a new software version.
Unfortunately, this requirement sometimes ties our hands behind our back, as it really restricts our ability to innovate and come up with new and better ways of doing things.
For example, we might come up with a better (and more usable) way of achieving a task which is already possible. It would be desirable to make this better way the default way, but we can't do this as it may have backwards compatibility implications. So we are stuck with leaving the new (better) way as a mode, that the user must "turn on" before it becomes available to them. Unless they read the documentation or online help (which many customers don't do), this new functionality will remain hidden forever.
I know that Windows Vista annoyed a lot of people when it first came out, because of all the software and peripherals which didn't work on it, even when they worked on XP. It received a pretty bad reception because of this. But you can see that Microsoft have also succeeded in making some great innovations in Vista, at the expense of backwards compatibility for a lot of users. They took a risk. Did it pay off? Did they make the right decision? I guess only time will tell.
Do you find yourself balancing the conflicting needs of innovation and backwards compatibility? How do you handle the juggling act?
As far as my programming experience is concerned, if I'm going to fundamentally change something in a way that would prevent past data from being used correctly, I need to create an abstraction layer for the old data where it can be converted for use in the new format.
Basically I set the "improved" way as the default and make sure, through a converter, that it can read data in the old format but save or store data in the new format.
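A rough sketch of that pattern (the Document type, the 'N' format marker, and the function names are all made up for illustration):

    // Sketch of the "read old, always write new" pattern. Only the
    // legacy adapter knows anything about the old layout.
    #include <fstream>
    #include <iterator>
    #include <stdexcept>
    #include <string>

    struct Document {
        int         version = 2;   // current format version
        std::string payload;
    };

    // Adapter for the legacy format: converts old data into the new model.
    Document ConvertFromLegacy(std::istream& in)
    {
        std::string legacy((std::istreambuf_iterator<char>(in)),
                            std::istreambuf_iterator<char>());
        Document doc;
        doc.payload = legacy;      // translate old fields into the new model
        return doc;
    }

    Document Load(const std::string& path)
    {
        std::ifstream in(path, std::ios::binary);
        if (!in)
            throw std::runtime_error("cannot open " + path);

        if (in.peek() != 'N')              // hypothetical "new format" marker
            return ConvertFromLegacy(in);  // old data goes through the adapter

        in.get();                          // consume the marker
        Document doc;
        std::getline(in, doc.payload);     // parse the current format directly
        return doc;
    }

    // Saving always writes the new format, regardless of what was loaded.
    void Save(const Document& doc, const std::string& path)
    {
        std::ofstream out(path, std::ios::binary);
        out << 'N' << doc.payload << '\n';
    }

The point is that the legacy layout is quarantined in one adapter, the new behaviour is the default, and everything written back out is in the new format.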
I think the big thing here is test, test, test. Backwards compatibility shouldn't hinder forward progress.
Split development into two branches, one that maintains backwards compatibility and one for a new major release, where you make it clear that backwards compatibility is being broken.
The critical question you need to ask is whether the customers want or need this "improvement"; even if you perceive it as one, your customers might not. Once a certain way of doing things has been established, changing the workflow is a very "expensive" operation. Depending on the computer savviness of your users, it might take some of them a long time to adjust to a change in the UI.
If you are dealing with clients, innovation for innovation's sake is not always a good thing, as much fun as it might be for you to develop these improvements.
You could always look for innovative ways to maintain backwards compatibility.
