We have a project with a single code base that we build on both Windows and Linux, and we want to run Klocwork code analysis on both platforms. Our current approach is:
We have set up one KW project in the web UI
Inject and build on Linux, push the results to the server, save the report
Inject and build on Windows, push the results to the server, save the report
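(Roughly, each platform's scan is the usual inject/build/load sequence; in the sketch below the build command, server URL, and project name are placeholders, and the exact options depend on the Klocwork version:)

    # Sketch only - placeholder names; check your Klocwork docs for exact flags
    kwinject --output kwinject.out make                              # wrap whatever the native build command is on that platform
    kwbuildproject --url http://kwserver:8080/MyProject --tables-directory kwtables kwinject.out
    kwadmin --url http://kwserver:8080 load MyProject kwtables       # push the results to the server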
It somehow works, but the problem is that the latter scan effectively overwrites the results of the first one. If we save the report directly after the push, we still have a saved copy; but if developers want to triage/analyze a hit that is present only in the first build (i.e. some Linux-specific code), it's almost impossible, because KW has already marked that hit as "obsolete" (it was not present in the Windows scan).
Having two projects is not really an option, because 90% of the code is shared and it would create a huge overhead for developers to triage the same hits twice.
There are multiple options to achieve your goal.
Option 1: There is a tab under Projects called Builds that gives you the build chain report. There you can see the reports of previous builds.
Option 2: Are you using the Klocwork desktop tools (plugins/kwcheck)? If so, developers are automatically notified about the new defects/issues they have introduced on their own machines, so they may not need to review the Klocwork portal just to see which issues they have created.
Option 3: You mentioned that 90% of the code is shared. Does that mean your project needs Windows DLLs and Linux libraries together to build?
If the answer is yes, please let me know and I will think about possible workarounds.
If the answer is no, then creating the kwtables is a one-time job, and from the second run onwards Klocwork can perform incremental analysis (kwbuildproject ....... --incremental).
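As a hedged sketch (placeholder server and project names; exact options depend on your Klocwork version), a subsequent run can reuse the existing tables directory:

    # Re-run against the existing kwtables directory so only changed files are re-analysed
    kwbuildproject --url http://kwserver:8080/MyProject --tables-directory kwtables --incremental kwinject.out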
Option 4: Creating multiple projects is not a bad option. Existing project settings can be replicated and the issue statuses can be kept in sync. When you push the results to the Klocwork server, they are pushed from the build machine to the Klocwork web/database server, which creates a /projects_root/My_Project/builds/My_Build_Name/ directory. So maintaining two Klocwork projects won't make much of a difference.
Option 5: Schedule a call with the Klocwork support team. They will be happy to help you find the best possible approach.
I hope this helps.
We are maintaining code for one of our clients.
Initially, we copied all the source code that they have and added it to our TFS 2012.
We modify the code any time they need a bug fix and give the client deployment packages.
Now, client wants all the latest code in their TFS 2012 as well.
Is there a way to update their source code with our changes? ...
preferably automatically (i.e. a PowerShell script), and preferably with the history of changes.
There are many approaches each with some pros and cons. The following are the main options I would suggest.
Database backup and restore
This is the only path that guarantees full fidelity. It has some technical difficulties (e.g. SQL Server versions and editions) and political ones (how much information you are willing to expose, how much effort you want to put into sanitizing your data).
Project synchronization
There are some tools, most notably the TFS Integration Platform, that use the API to read and replay the changes from one system to the other. This requires that the syncing tool can see both systems via HTTP(S).
It gives you the flexibility to carry over only some data (say, source code but not work items).
Keep in mind that you will always lose something in the process: the changeset numbers will never match, and some user details will be lost.
Dumb dump
Give up preserving full history and be content with just sharing the code.
This is the simplest to implement: get all the code, ship it, and check it into the other system. You can include the release notes in the check-in comment.
Two simple scripts using TF.exe are all you need.
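A rough sketch of what those two scripts could look like, assuming tf.exe is on the PATH and with workspace paths and the check-in comment invented for illustration:

    # Script 1 - pull the latest code from our TFS workspace
    tf.exe get '$/OurProject' /recursive /noprompt

    # Script 2 - copy into a workspace mapped to the client's TFS, then check in with the release notes
    Copy-Item 'C:\ws\OurProject\*' 'C:\ws\ClientProject\' -Recurse -Force
    tf.exe add 'C:\ws\ClientProject' /recursive /noprompt        # picks up new files only
    # files that already exist on the client side need 'tf.exe checkout' (or tfpt online) before check-in
    tf.exe checkin /comment:'Release drop - see release notes' /recursive /noprompt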
You can use the TFS Integration Tool to achieve the code migration (TFS-to-TFS). The TFS Integration Tool moves data between two different servers. The migration is done through the TFS APIs, and there are also some limitations (check the above link for more info).
For detailed steps, please see my answer to this question: Move Team Project to another Project Collection TFS 2013
I really don't know how to better explain this situation I'm running into but I'll try my best.
This is what I'm trying to do...
I currently have a batch file that does a set of actions such as downloading files from FTP, creating folders, modifying some text files, etc. This used to take us 45 minutes to do manually, and the batch script automation makes it much easier.
The next step I want is to launch our .NET Windows application, log in to it, do some actions in it, and then log out of it. This is actually a regression test case which I've automated using VS Coded UI on another machine. The problem I'm running into is that there is a separate support team who will need to do that 45 minutes of work which I've already automated, followed by some actions after logging into the application, and that support team's machines will not have VS or Coded UI installed on them.
So, how do I go about it? Any idea, please?
You can execute the CodedUI tests you've written without Visual Studio/CodedUI being installed on the machines. Remember that you can run tests through controllers and agents in TFS or Microsoft Test Manager. You can apply those principles to run them manually, even if it's a strange corner case. This takes two steps, if I recall correctly:
Design your CodedUI tests to reference the CodedUI .dlls as part of the solution, rather than the GAC. By this I mean copy and paste the required .dlls into a solution folder and replace the existing references with ones that point to the .dlls in that folder. When you distribute your tests, be sure to include this solution folder, of course. (UPDATE: After some more experience with this, I've found it much easier to use NuGet packages instead. Project-level references are an absolute NIGHTMARE.)
Install the "Test Agent" software that Microsoft provides for free on the tester's machine. This will install the other testing .dlls your tester will probably need in their GAC. You could do step 1 with these as well but to be honest I think this is less trouble. In addition, it installs the necessary mstest executables.
Your testers will then have to use mstest.exe (UPDATE: once you've installed the agents, you can/should use vstest.console.exe as an alternative) in the console to run your CodedUI tests. Alternatively, you can probably use PowerShell or your batch files to execute your CodedUI tests along with your other automation in one neat package.
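For example, the invocation could look something like this (assembly and test names are made up; run it from a command prompt where the agent-installed runners are available):

    # Classic runner installed with the Test Agent
    mstest.exe /testcontainer:CodedUITests.dll /test:LoginRegressionTest /resultsfile:results.trx

    # Or, with the newer agents, the vstest runner
    vstest.console.exe CodedUITests.dll /Tests:LoginRegressionTest /Logger:trx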
Please let me know if this gave you a potential solution for your problem.
I have created a number of (standalone) automated test-cases captured using CodedUITest in Visual Studio 2013, to test webpages.
They work fine within Visual Studio, both individually and when several of them are put into a single project as a solution to create a kind of playlist.
However, I'm trying to use Microsoft Test Manager as a 'front end' - in order to be able to select which tests to run, create playlists, decide how many times a specific test case should be run, etc., with the results stored.
TFS is being used both to store my (individual) test cases and as the place where I'd like to deposit the resulting pass/fail output, etc.
The trouble is, even though the test-automation part functions very well within Visual Studio, getting Microsoft Test Manager to work with what I have and its associated environment is proving a COMPLETE NIGHTMARE.
My setup is simple; I have a virtual machine set up with the testing environment, which allows me (within VS) to run these automated tests.
Why is this proving so difficult to get working with MTM? It should be easy - I should simply have to point MTM at the folder my test cases are stored in and use its GUI to tell it what and how many tests I want to run.
Anyone else have a similar problem, or a similar setup?
All MS does is point me to (countless) pages which I've already read - and the whole thing seems to be much deeper than it needs to be.
You can't just point MTM at a folder and tell it to run the tests, as it would have no idea where to put the data. The results of each CodedUI run are associated with a corresponding Test Case in MTM.
You also need to have an automated build create the output (your assemblies) for you. Ideally everything goes together with your application. As your application changes, so will your tests.
You should add your CodedUI projects to the same solution that is used to build the application you are testing. Then, when the automated build for that application kicks off, your code is picked up too. Both things, tests and application, end up in a drop location. It is that drop location that MTM will use to find your test assemblies.
If, while you have the main solution open, you open/create a Test Case, you can go to the automation tab in Visual Studio and associate one of your CodedUI tests with that Test Case. The Test Case will then show up as Automated in MTM.
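(If you have a lot of tests, an alternative to associating them one at a time is tcm.exe, which can bulk-create test cases from a test assembly. Very roughly, and with placeholder collection/project names since I don't know your setup:)

    # Bulk-create automated test cases from an assembly (exact options vary by TFS version)
    tcm.exe testcase /import /collection:http://tfsserver:8080/tfs/DefaultCollection /teamproject:MyProject /storage:CodedUITests.dll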
Now that we have the versioned bits and know where to find them, MTM needs somewhere to run them. If you open MTM and switch to the Lab Center you can create a Standard environment to run your tests. This will automatically go and install the required agents, so you will need admin rights on those boxes.
Now that we have both versioned tests and an environment, you can find the Test Case in MTM and see that it is "automated". If you right-click on it and select Run, you will get a box requesting an environment and which version of the bits to run. It will then go off and run those tests against that environment and feed the results back into the Test Case.
Does that help?
You'll need to set up a Test Controller and a Test Agent and associate your build with a Test Plan in MTM. In the Test Lab you need to associate the Test Agent with the Test Controller.
Once that is done, you'll need to allow the File & Printer Sharing exception in Windows Firewall.
Then you will have to add the Test Controller and Agent accounts to the appropriate groups under:
Control Panel -> All Control Panel Items -> Administrative Tools -> Computer Management
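Both of those steps can also be scripted if you prefer; a rough sketch (the exact local group names created by the controller/agent installers vary by version, so check Computer Management first):

    # Enable the built-in File and Printer Sharing firewall rule group
    netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes

    # Add the lab/service account to the local group created by the agent installer (group name is an example)
    net localgroup "TeamTestAgentService" DOMAIN\TestAccount /add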
I've got custom build configurations set up in my V.S. 2012 Win Phone 8 solution (distinct configurations other than just Debug and Release) and am using SQLite (which cannot be built against the AnyCPU platform), and I cannot find a way to run the test kit since it insists on a "Release + AnyCPU" build. Copying the contents of the actual output folder to "Bin\Release" doesn't seem to satisfy the test kit (nor did setting the output folder in the project file to "Bin\Release" for the appropriate configuration), and I can find no way to change the test kit's settings/path. Is this simply impossible, or am I just missing something?
I have read this post but it's less than helpful and also incomplete (that post doesn't say anything about custom build configurations). I have also sent negative feedback here asking a similar question (but of course that's unlikely to be replied to).
I was going to add "store-test-kit" and "custom-build-config" tags to this but don't have the reputation to do so, so I've added them in this note instead in case it helps with future searches.
My solution combines a number of projects spanning from Windows 8 Desktop to Windows 8 Phone (and including DLLs and other supporting dependency projects, including SQLite). In order to build only what is needed, I drastically customized my build configurations. So, for example, I have:
N-Dbg, N-Rls, N-Str and P-Dbg, P-Rls, P-Str as build configurations. These distinguish between the Desktop ([N]ot phone) and [P]hone primary projects, and between the Dbg (Debug), Rls (Release), and Str (Store) build sets. I have a distinct Str build so that the live store code bits are excluded from the Release build, which in turn leaves out all the extra Debug features.
As complicated as it all sounds, it works quite well in practice, until I need to build a "Release" version. (And this is completely ignoring the platform part of the equation, for which just selecting ARM might suffice; I really can't say, because I have no good way to test it at the moment.)
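(For what it's worth, selecting one of these configurations from the command line is just an MSBuild switch; the solution name below is a placeholder:)

    # Build the Phone/Store flavour explicitly (placeholder solution name; platform choice per the note above)
    msbuild MySolution.sln /t:Build /p:Configuration=P-Str /p:Platform=ARM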
Ideally, then, what I want the Store Test Kit to test against is the P-Str build configuration. I tried setting the configuration to output that set of files to "Bin\Release", which is where a brand-new standard build configuration puts "Release\Any CPU" builds, but the Store Test Kit didn't like it. It doesn't even list anything for the path to the XAP file that it's supposed to be testing.
After my initial post I did think to try making a new Release build configuration set up the same as the P-Str one, but that also did not seem to make the Store Test Kit happy (though I admit that I am not 100% sure I added the new build configuration completely accurately, since it had been a long time since I created the custom configurations in the first place).
Does anyone out in the S.O. world have any experience with such a completely custom build configuration that might be able to help me out here?
I guess, after almost a full year, if no one has found a better way, I'll take the advice and call this the answer...
I was very close with the idea of copying the files to the Bin\Release location, but there were a couple of missing pieces. First, I had to add a Release+AnyCPU entry to the project file (even though I never actually use it), apparently so that the Store Test Kit knows where to look for the XAP. Second, I had to be sure to rename the XAP file in the Bin\Release folder to ProjName_Release_AnyCPU.xap. Once these things are done, I can run the Test Kit.
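For anyone else who hits this, the copy-and-rename part is easy to script; here's a rough sketch with made-up project and folder names (adjust to whatever your custom configuration actually outputs):

    # Copy the custom-config output into the folder the Test Kit expects, then rename the XAP
    $src = 'Bin\P-Str'        # wherever your custom configuration actually builds to
    $dst = 'Bin\Release'
    New-Item -ItemType Directory -Path $dst -Force | Out-Null
    Copy-Item "$src\*" $dst -Recurse -Force
    Rename-Item (Join-Path $dst 'ProjName.xap') 'ProjName_Release_AnyCPU.xap' -Force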
We are starting with SharePoint development with a team of three and are currently setting up our development environments. We would like to avoid installing a Server 2008 machine for each developer, so a single terminal server has been set up, using remote windows to start a VS2008 instance on each developer's machine. Now we would like to separate the developers' testing environments (i.e. a different site collection per developer), but have realized that the assemblies need to be installed into the GAC to show up properly on the site. And since there is, AFAIK, only one GAC, developers wouldn't be able to test their work independently.
Is there any way we could create separate testing environments without installing a bunch of 2008 Servers?
So you're all going to remote in and fire up Visual Studio and be compiling stuff and restarting IIS, etc.?
You're going to be stamping on each other's toes.
A wiser choice nowadays is to use Hyper-V (or some other virtualisation).
We use Windows Server 2008 on our laptops, and use Hyper-V to run our dev environments. We each have our own dev environment (sandbox), and these have VS2008, SVN, NUnit, etc.
Our code is tested against each other's thanks to CruiseControl running on the only shared Hyper-V machine.
This has been great for us... we distribute the load, we can work on the move, we don't step on each other's toes, and if we need to do a demo we can switch Hyper-V machines and demo from the demo VM (branched from the dev one early on so that the environments are known).
Go virtual and don't look back.
PS: I've just seen your comment about one server... just put Hyper-V on that and run 3 instances. That's also what we do ;)
I don't know about installing the server on everything, but this sounds like an ideal task for virtual machines rather than physical ones. Where I work we use VMware a whole lot for this kind of work and it does very well.
It's also useful to be able to roll back to a snapshot when it comes to testing installation processes and so on.
No. In addition to the GAC there are all the SharePoint files in the 12 hive, such as features and site templates. It's not worth what you save on server costs.
(Of course if you don't use the GAC, but deploy to the bin folder, and you don't touch anything in the 12 hive, you can give each developer their own web application on the same server. But this approach puts a lot of restrictions on what they can do. It's still not worth it.)
Virtual machines will work, but they can be slow to develop on. For instance, you'll need to restart the application pool for every GAC deploy, which means a pause of maybe 15-60 seconds to reload the application (depending on the hardware). This will become annoying.
Virtual machines work better for test and production, where you don't restart the application so often.
I recommend a physical server for each developer. This will minimize the code-deploy-test cycle time, and make sure they don't have to worry about stepping on each others toes.
You are on the wrong track with Terminal Services - it's just not going to give you any separation.
A lot of people do recommend developing directly on Windows 2003/2008 Server, and it does simplify some things like remote debugging.
I prefer the more traditional method of using VMware to run virtual machines. These can be running on a local or remote host. Remote debugging is a little more complex to set up, but still possible.
Finally, if possible, deploy to the bin dir rather than the GAC. This will make it much easier to deploy automatically after compilation.
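As a rough illustration (the paths and app pool name are just examples), a bin deployment can be as simple as a copy followed by an app pool recycle:

    # Copy the assembly into the web application's bin folder and recycle its app pool
    Copy-Item 'bin\Debug\MyWebParts.dll' 'C:\inetpub\wwwroot\wss\VirtualDirectories\80\bin\' -Force
    & "$env:windir\System32\inetsrv\appcmd.exe" recycle apppool /apppool.name:'SharePoint - 80'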
The contributors are right that there are lots of stumbling blocks to multi-developer, single-server environments.
Number one: developers will be trying to attach to the same web application worker process (w3wp.exe), so creating separate web applications on different ports is a must, unless you are prepared to share debugging time. See: How to setup a development environment for sharepoint 2013
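(For a SharePoint 2013 farm like the one in that link, the per-developer web applications can be created from the SharePoint Management Shell; the name, port, and account below are invented for illustration:)

    # One web application per developer, each on its own port (placeholder names and account)
    New-SPWebApplication -Name 'Dev - Developer1' -Port 8081 `
        -ApplicationPool 'DevAppPool1' `
        -ApplicationPoolAccount (Get-SPManagedAccount 'DOMAIN\spService') `
        -URL 'http://devserver'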
The second problem is when you try to collaborate and use shared components/features. Whether wanting to work separately is even desirable is debatable; I believe that team developers should be collaborating and sharing, so combining work is desirable to ensure seamless integration into a single final solution and that no work is duplicated. The multi-developer, single-server environment works perfectly until you try to collaborate: 'One common mistake is to have one “development server” used by all team developers. Unless team members are working on totally unrelated components and never need to do common things such as restart IIS or attach a debugger to an IIS process, this type of environment generally doesn’t work well.' http://technet.microsoft.com/en-us/magazine/dn145990.aspx We made this mistake through lack of experience and knowledge, but once you have made it, it's possible to work around it.
My first attempt to share features was to copy developer 1's project into developer 2's solution, add a reference to it in developer 2's project, and add all the features to developer 2's package. Deploying this works fine for developer 2 until, as I discovered, developer 1 detaches their solution from the debugger: this retracts the solution with the duplicated solution ID from the farm, and therefore from each developer's web application. So developer 2 has the rug pulled out from underneath them. Although this is a partial solution and seemed to work for a while, it took me some time to work out what was happening and which combinations of dev 1 and dev 2 deployments would and wouldn't break each other's work.
So I found a better solution. In the project properties in Visual Studio, under the SharePoint tab, there is a check box called 'Auto-retract after debugging'. By default this retracts the solution when the developer stops the attached debugger, pulling the features out from underneath the other developers. Unticking this box prevents the retraction and leaves each developer's individual solution deployed at farm level; reattaching the debugger just replaces the solution with minimal fuss.
In my experience recycling the IIS application pool is so fast that the other developers don't even notice, but with a team larger than two this might become more noticeable, so perhaps someone else could add their experiences. I also guess that unless the other developers try to attach at exactly the same time the recycle is happening it will be fine, so there is only a really small chance of a clash, and simply detaching and reattaching will fix it if it is ever experienced.