My objective is to have traceability between requirements, design, test cases and test results for a project. Can anyone give me the details of such an ALM tool? It should be an open-source tool.
There are many tools for this. The first question, as always, is: which programming language is this for? And how big is the team (including the specialist departments that will also use these tools - that drives the requirements)?
Assuming Java is the language, I'd prefer these tools:
Requirements: JIRA (not free, but the best!); Mantis or Bugzilla may also do an acceptable job
Design: it depends on what kind of design. For UML, a good choice used to be TogetherJ (RIP => now part of Borland's toolbox); you may try ArgoUML or WhiteStarUML. Using a wiki, I'd suggest e.g. DokuWiki, and a good office suite is also an option - depending on the needs within your team! (Yes, a design always includes text.)
test cases: I'd like to split this topic a bit into “test planning”, “test execution” and, last but not least, “test documentation”
test planning: give TestLink a look
test execution: (free of charge!) Jubula, JUnit or Selenium, depending on your needs (see the sketch after this list)
test documentation: use a standard editor like Word or Writer etc. (not the wiki)
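To make the traceability the question asks about concrete: a common low-tech approach is to tag each automated test with the IDs of the requirement and test case it covers, so test results can be mapped back to the corresponding entries in TestLink or JIRA. Below is a minimal, hypothetical sketch using JUnit 5 and Selenium WebDriver; the IDs "REQ-42"/"TC-101", the URL and the field names are all invented for the example.

    import org.junit.jupiter.api.*;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Hypothetical IDs: "REQ-42" is the requirement, "TC-101" the test case it verifies.
    @Tag("REQ-42")
    @Tag("TC-101")
    class LoginTest {

        private WebDriver driver;

        @BeforeEach
        void openBrowser() {
            driver = new FirefoxDriver();   // any WebDriver implementation will do
        }

        @Test
        void userCanLogIn() {
            driver.get("https://example.com/login");             // placeholder URL
            driver.findElement(By.name("user")).sendKeys("demo");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.name("submit")).click();
            // The assertion is what the test report (and thus the requirement) traces to.
            assertTrue(driver.getTitle().contains("Dashboard"));
        }

        @AfterEach
        void closeBrowser() {
            driver.quit();
        }
    }

If the build server runs the suite with JUnit 5's tag filtering, results can be grouped per requirement, which gives you the link from test results back to test cases and requirements.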
Additional perspectives:
build server: I missed a build server in your list. If you write a piece of software, how do you make certain it can still be built when a machine breaks down or a person is unavailable (for whatever reason)? Building the software only on a developer's machine carries exactly the risk that it may not be buildable on another machine or by another person. So use a build server (Jenkins/Hudson should be on your short list).
repository: in addition to keeping the source code in your version control system, you should also make sure you have access to all the external libraries your program uses. Try Artifactory or Nexus.
clearing process: if you work in a company whose strategy is to actually test software before publishing it, you will want a clearing (sign-off) process based on the test results. Think about which group of people should be involved in clearing the software, and get them into your project as partners - otherwise it'll be hard!
I hope this answer is helpful and fits your needs.
ALM is a huge topic, and here we're discussing just a part of the SDLC, which is only ONE topic within ALM.
We currently have two solutions that share several projects, as well as some projects that are unique to each of them. We have a build definition for each of these solutions, set to Gated Check-in.
Unfortunately, it seems that having multiple definitions with gated checkins set means that if I make a change to one of the shared projects, it only runs one definition. In a perfect world, I want it to build both solutions in this circumstance.
I know that I could just create a single build definition that builds both solutions, and this will work great in the scenario in question, but then if I am modifying a project that is unique to one solution, it will still build both solutions, ugh.
Is there a way to configure our builds such that we get the best of both worlds? I would like the consistency of ensuring shared code works correctly in both solutions, but I also would like builds not to take double the time for changes that affect only one solution or the other (by far our most common use case).
Or am I just stuck with the tradeoff of one or the other?
The basic problem in your current situation is how to identify the change: whether it was the common project or the unique project that was modified. I don't think there is any EASY means of identifying this at the time the code is built.
One option, which is NOT THE BEST solution, would be to separate the common projects out into another solution that compiles and puts the DLLs in a common location, which the unique solutions then consume. This way you can have three independent gated check-ins: if there is a change to the common solution, you compile both unique solutions within that same build definition; if not, the common solution and the one affected unique solution are compiled in that unique solution's own build definition.
There is a repository of tests for the Mozilla addons site, although they're written using Selenium. I'd like to know if there are any real-world examples available for Watir, so I can see how professionals use the framework.
This is a more general question about how one goes about building a suite of tests for a website in Watir. On a superficial level, one can write a bunch of separate .rb files with crude error reporting and fire them all off; but I'd like to know more about writing actual classes and proper test structures that raise issues and return reports. How is this done? Are there any books on this? Tutorials?
Check out WatirMelonCucumber - a set of watir-webdriver tests against Google and Bing - and also EtsyWatirWebDriver - a set of watir-webdriver tests against Etsy.com.
The Watir wiki has a selection of tutorials, examples, etc. as well:
Start Here
Learning More
Wiki homepage
Those are however fairly basic and don't get into the 'how to organize things' level.
In that case there are a number of frameworks in various states of development. The most active ones, I think, are perhaps Taza and QA Robusta. Each of them approaches things a little differently. QA Robusta is wrapped a bit around Minitest (if I understand things right) and provides its own reporting. I'm still learning about Taza, so I can't really comment on it much. I also recall hearing about a 'WatirSplash' gem/framework, discussed in a recent Watir podcast, which is designed to help you use Watir along with RSpec (and, I might presume, Cucumber).
If you are a BDD/spec-by-example sort, then you may want to use either (or both of) RSpec or Cucumber, perhaps in combination with the WatirSplash gem, as a way to organize and describe your tests, and then implement the actual test code via Watir. In that case you would likely be using the HTML-based reports that RSpec/Cucumber can generate, instead of rolling your own or depending on a Watir framework for the reporting.
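To give a feel for the kind of layering these frameworks aim at, here is a small, hypothetical page-object sketch. It is written with JUnit and Selenium's Java binding only to stay consistent with the earlier code on this page; the class names, URL and field names are invented, and the same structure (one class per page, thin scenario-level tests on top) maps directly onto Watir with RSpec or Cucumber.

    import org.junit.jupiter.api.AfterAll;
    import org.junit.jupiter.api.BeforeAll;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Hypothetical page object: one class per page, exposing intent-level methods
    // instead of raw element lookups scattered through the tests.
    class SearchPage {
        private final WebDriver driver;

        SearchPage(WebDriver driver) {
            this.driver = driver;
        }

        SearchPage open() {
            driver.get("https://www.example.com/search");    // placeholder URL
            return this;
        }

        SearchPage searchFor(String term) {
            driver.findElement(By.name("q")).sendKeys(term); // "q" is an assumed field name
            driver.findElement(By.name("q")).submit();
            return this;
        }

        boolean resultsMention(String text) {
            return driver.getPageSource().contains(text);
        }
    }

    // The test reads as a scenario; all page mechanics live in the page object.
    class SearchTest {
        private static WebDriver driver;

        @BeforeAll
        static void start() { driver = new FirefoxDriver(); }

        @AfterAll
        static void stop() { driver.quit(); }

        @Test
        void searchReturnsRelevantResults() {
            SearchPage page = new SearchPage(driver).open().searchFor("watir");
            assertTrue(page.resultsMention("watir"));
        }
    }

The point is the separation: when a page changes, only its page class changes, while the scenario-level tests (and the reports they produce) stay readable.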
More Watir frameworks:
https://github.com/jarmo/WatirSplash
https://cyberconnect.biz/opensource/qa_robusta.html
Not in active development:
https://github.com/scudco/taza
https://github.com/bret/watircraft
QA Robusta most likely will not have many new features added, but it will be supported. Instead you may want to check out Whirlwind. Whirlwind uses similar concepts to other frameworks such as qa_robusta and taza, but it is lighter-weight and tailored around Cucumber/RSpec. See the walk-through for a Google search example.
I would like to know if there are good practices or good tools for moving a bunch of Windows makefile projects to the MSBuild (VS 2010) format.
If you think it's not a good idea to do this with a tool, maybe you know of something like a dependency analyser that could help produce a checklist?
Having recently converted a legacy "make"-based build to MSBuild, I'd have to say that there is no real easy way. Granted, the legacy build I was working on was actually calling msbuild to build .sln files (I believe the build engineer who put the other process in place was old-school and was using the toolset that best suited him, rather than .NET).
However, what I noticed was that the make tools (specifically nmake.exe/build.exe) were directory-based - subdirs were "built" before parent dirs - whereas that is not the case for MSBuild: it's solution- and project-based.
Get your code into Visual Studio projects living in a "flat" directory structure (having all projects as children of a single "Source" folder will really make your life easier in the long run - don't have projects that live several dirs "down the tree").
Use multiple solutions to break the build into "tiers" - order the builds of the .slns in a helper .bat file - this will help you in the long term when converting to Team Build.
(My answer started to get out of control - your question reminds me of that joke about the American visiting Ireland who gets lost and asks a local "how do you get to Killarney?", and the local replies "well, I wouldn't start from here"... Can you give a bit more detail about what you are actually building? Is it .NET code? I'm sure there is plenty of advice I and others could give you, but we don't know what you are working with.)
I've never been a big fan of MFC, but that's not really the point. I read that Microsoft is due to release a new version of MFC in 2010 and it really struck me as odd - I thought MFC was dead (no ill intention, I really did).
Is MFC used for new development? If so, what's the benefit? I couldn't imagine it having any benefit over something such as C# (or even just C++ using the Win32 APIs, for that matter).
There is a ton of code out there using MFC. I see these questions all the time: is this still used, is that still used? The answer is yes. I work in a very large organization which still employs hundreds of people who write in COBOL. If a technology has ever been used in the enterprise, it will continue to be used until there is no more hardware to support it; then some company will pay someone to write an emulator so that the old code will still work.
The navy still uses ships whose computers have magnetic-core memory, and I'm sure they have people to work on them. Technology, once created, can never not be supported. It's a bit of a deus ex machina situation: large organizations aren't completely sure what their systems do, and they have such an overriding fear of bringing the enterprise to its knees that they have no desire to try out your newfangled technologies (BTW, we pay IBM for best-effort support on OS/2).
Also, MFC is a perfectly acceptable solution for Windows development, given that it is an object model wrapping the system API - which is pretty much all that most people get out of .NET.
As an addendum, and since this question is up for a bounty, here is a quote from Microsoft regarding MFC in VS 11:
In every release we need to balance our investment across the various areas of the product. However, we still believe that MFC is the most fully-featured library for building native desktop applications. We are fully committed to supporting and maintaining MFC at a high level of quality. Here’s a short list of some of the issues that we fixed in MFC for Visual Studio 11:
Here is the link if you want to read the full post
Coolness is not a factor in choosing the technology for a new system. Yes, if you are a student or want to play around, you choose whatever you want.
But in the real world, each technology has advantages and drawbacks. A year ago one of the teams started a new project, and it was decided that it would be done in MFC.
The reason is very simple: they have to use the Windows API a lot for low-level operations with the printer, Internet Explorer and God knows what else.
C# was not even in the game; the decision was between MFC and Qt. Both had the needed functionality, both could easily integrate the low-level code; the only difference was that some team members already had MFC experience, so they didn't have to waste time and money on training.
Let's suppose they had chosen C# and WPF:
-1: You have to wrap all the native C++ and ASM code in a DLL (ouch, this can be painful; instead of coding you write wrappers).
-1: You probably need two teams now, one for the UI and one for the WinAPI stuff. It is very unlikely that you'll find many people able to write both C# and WinAPI code. Agreed, either way you need someone to make the interface pretty (programmers usually suck at this, and such people cost more), but at least with C++-only code there is no waiting between two teams: need a UI modification? No problem, I don't have to wait for the UI designer; he can make it pretty later.
+1: You can write the UI code in C# and WPF; let's say the UI development is faster, but the UI is only 1/4 of the project, so the total gain is probably very small.
-1: Performance degradation: for every small operation you can't do in C#, you call an external DLL (this is a minor issue, since the program runs on quad-core machines with 8 GB of RAM).
So, in conclusion: MFC is still used for new development because requirements and costs decide the technology for a project, and it just so happens that MFC is the best fit in some cases.
MFC is still used for some new development, and a lot of maintenance development (including inside of Microsoft).
While it can be minutely slower than using the Win32 API directly, the performance loss really is tiny -- rarely as much as a whole percent. Using .NET, the performance loss is considerably greater (in my testing, rarely less than 10%, with 20-30% being typical, and higher still for heavy computation). Just for example, I have a program that does eigenvector/eigenvalue computation on fairly large arrays. My original version using C++ and MFC runs one test case in just under a minute on our standard test machine. Some of my coworkers decided it would be cool to re-implement it in C#. Their version takes almost three minutes on the same machine (quad-core, 16 GB of RAM, so no, not "legacy" hardware). I'll admit I haven't looked at their code too closely, so maybe it could be improved, but they're decent coders, so a 3:1 improvement strikes me as unlikely.
With MFC, it's also easy to bypass the framework and use the Win32 API directly when/if you want to. With .NET, you can use P/Invoke for that, but it's quite painful by comparison.
MFC has been updated with every release of Visual Studio. It just isn't the headline feature item.
As for new development, yes. It is still used and will continue to be so (even though I, like you, prefer not to). Many organizations made the technology decision years ago and have no reason to change.
I do think you are talking about well-established shops, though - folks with more interest in maintaining/enhancing what has been written than in staying on the cutting edge.
The release of the MFC Feature Pack (one or two years ago, IIRC) was the biggest extension of MFC in about ten years, and it gave quite a boost to MFC development. I guess a lot of companies decided to maintain their legacy applications, push them forward and develop new applications on that basis.
For me (as someone who has to maintain a large MFC application), the bigger problem is the decreasing development and support of (Microsoft and third-party) components, rather than MFC itself. For instance, porting to 64-bit is not easy if a lot of old, unsupported, purely 32-bit ActiveX components are assembled in the application.
I did a project last year based on MFC. I'm not sure why MFC was chosen, but it was adequate for making a virtual 3D graphical user interface (a building management security system) with a 10-frames-per-second refresh rate that runs efficiently on Win32-based PCs dating back to the mid-1990s. The executable (which requires only core Win32 system DLLs) is less than 400 KB - not an easy accomplishment with modern tools.
There are advantages to staying away from managed code (maybe you're writing a driver UI, or doing COM).
That and there's tons of MFC code out there. Maybe you work for Company X, and need to use one of the zillion DLLs they've been writing over the last dozen years.
I can think of one commercial software title that benefits from using MFC over C#: Wwise[1]. Wwise is both an authoring tool and a sound engine. C++ is an obvious choice for the sound engine, so it makes sense to write the authoring tool in C++ as well. They could have built the authoring tool in C# and the sound engine in C++, but if they're debugging a problem with the sound engine that's reproducible through the Wwise authoring tool, it's easier for them to see the whole call stack just like that.
I think there are ways of getting a mixed call stack nowadays, but maybe that wasn't available when they first made Wwise? In any case, using MFC ensured that they wouldn't need a solution to the problem of mixed call stacks: the call stack just works.
[1]Wwise is built on MFC: https://www.audiokinetic.com/fr/library/edge/?source=SDK&id=plugin_frontend_windows.html