Is there any replacement tool for SONAR that can check .NET code quality and generate a report? - c#-4.0

I have a Visual Studio solution designed using C# 4.0.
I want to check the code quality of my solution and generate a report from it.
I tried FxCop, and I did get a report, but I need a report something like this (from the image).
There, the rules compliance is 85%, whereas FxCop only showed me the critical, error, etc. categories.
I was not able to deploy my project into SONAR at all, because I had a timeout issue with one of the projects in the solution.
Please, someone help me.
Thanks in advance.
Regards,
Roopini

I don't know of an equivalent of SonarQube for .NET projects, but if you really want this kind of reporting (which I can understand, obviously!), you should rather ask how to resolve your SonarQube installation issue instead of searching for something else. There are plenty of organizations where big .NET solutions are successfully analyzed with SonarQube and the C# plugins, so there's no reason why it can't work for you!
You can find useful material on the net to help you with this. For instance, there is a blog post written by John M Wright about "setting up SonarQube for C# projects". John periodically updates his post, so the information should still be very relevant.

Have you tried the tool NDepend? It generates interactive reports about .NET code quality and code-rule compliance. Here are some sample reports.
NDepend also integrates into Visual Studio (2017, 2015, 2013, 2012, 2010) and offers a range of interactive features (graph, dependency matrix, code-metrics visualization, code diff...). Another point about NDepend is that its code rules are actually C# LINQ queries (CQLinq), so it is pretty easy to customize a default code rule or create your own.
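For instance, a rule flagging overly long methods can be written directly as a LINQ query. This is only a minimal sketch: the warnif prefix and the JustMyCode.Methods / NbLinesOfCode identifiers follow NDepend's documented CQLinq code model, but the rule name and threshold here are purely illustrative:

    // <Name>Methods too big (illustrative)</Name>
    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbLinesOfCode > 30
    // Every matched method shows up in the report as a rule violation
    select new { m, m.NbLinesOfCode }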
NDepend also integrates with VS Team Services, so you get all the code-quality data in your VSTS UI instead of being redirected to a separate server.
I read that you have timeout problems analyzing your code base; maybe that is because the code base is pretty large. NDepend is optimized for this and can analyze a very large code base and create a report in a few dozen seconds (it takes around a minute to analyze the whole .NET Framework).
A 14-day, full-featured trial is available.
Disclaimer: I work on the NDepend team

If you haven't already, I would suggest taking a look at my blog post on setting up SonarQube for C# projects: http://www.wrightfully.com/setting-up-sonar-analysis-for-c-projects/
The key to fixing your issue will be determining what the system is doing when the timeout occurs. Take a look at your log files and see what the last lines were before it timed out. It could be that your code is complex and just needs more time, in which case you can adjust the timeout values for whichever tool is running at the time.
Otherwise, I would suggest running whichever analysis tool (FxCop, Gendarme, StyleCop, etc.) was active when the timeout occurred outside of SonarQube. That is, run the tool directly from the command line to see if it still times out or provides any additional information on the console.
Also, assuming you're using the sonar-runner tool to execute the SonarQube analysis, you can add the -X argument to the command line, which runs the analysis with debug-level logging enabled. This will produce a LOT more log messages, which may shed some additional light on the issue.
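For example, assuming sonar-runner is on your PATH and you launch it from the directory containing your sonar-project.properties file:

    sonar-runner -X

The debug output is verbose, so it is worth capturing it to a file you can search through afterwards.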

Related

Siebel compile using VBA macros?

We got an assignment to compile selected Siebel objects using VBA macros.
When I say selected, it means the list of objects will be available in an Excel sheet.
Is it possible to compile automatically from VBA?
Any help will be appreciated. Thanks in advance.
I can help you with this:
NO.
You can double-check with Oracle support.
As @Ranjith already mentioned, there is no supported API to compile an SRF. This applies to both the VBA COM interface and the Java Bean.
Even if you managed to find an undocumented way of compiling the SRF using VBA, it would be unsupported by Oracle: for any issue you have afterwards, they will ask you to reproduce it with a standard compile. So I'd also recommend not investing in this route.
For argument's sake, let's assume for a moment that there is a supported way. Even then, I'd argue that Excel is the worst way to automate compiling and deploying an SRF. It's a client application; it can't (or can only with difficulty) be run from a command line, and it doesn't interface with proper Continuous Integration tools like Jenkins, Travis CI, Bamboo and the like.
Building a CI/CD pipeline for Siebel from scratch is complex, so take your time to research the matter. Have a look at the commercial offerings, and if you do want to develop your own, find a good DevOps engineer and pair them with a strong Siebel engineer who has deployment experience.
As all previous commentators mentioned, this is a challenge, but it is still possible.
As a matter of fact, you can use scripting on the Siebel Tools Object Compiler service, which is triggered via a siebdev.exe batch-compile call. Playing with the RepositoryName input parameter gives you a way to pass the Excel file name into the service.
Incremental compilation can then be performed by following these steps in a PreInvokeMethod hook:
1. Open a transaction using the EAI Transaction Service (this may require some DLLs from the Windows Siebel Server distribution)
2. Create a new project (e.g. "__my_incremental_compilation__")
3. Find the desired repository objects and move them into your project
4. Pass the project name in the ProjectsList parameter of the service's Inputs property set
5. Continue the service call (wait for the compilation to finish)
6. Roll back the transaction
This worked well for me when I got stuck with the same question.
Hope it helps you!

TFS Build Definition in VS 2013

Can anyone explain the use of the parameters like automated tests, run settings, analyze test impact, etc., present under Test in the Process section of a Build Definition in TFS for Visual Studio 2013? I am a fresher and new to this technology, so could someone explain each of them in detail?
This is a very general and basic question, and most of it is about the underlying concepts.
Take run settings, for example:
Unit tests in Visual Studio can be configured by using a *.runsettings file.
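For instance, a minimal *.runsettings file looks roughly like this (a sketch: the RunSettings/RunConfiguration elements follow the Visual Studio documentation, but the values here are just illustrative):

    <?xml version="1.0" encoding="utf-8"?>
    <RunSettings>
      <RunConfiguration>
        <!-- Run tests as 32-bit or 64-bit -->
        <TargetPlatform>x86</TargetPlatform>
        <!-- Directory where test results are written -->
        <ResultsDirectory>.\TestResults</ResultsDirectory>
      </RunConfiguration>
    </RunSettings>

You then point the build definition (or vstest.console.exe) at this file.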
analyze test impact
Used to determine which tests should be run since a previous build; on TFS 2013 this only applies to manual tests. For more details, please refer to this tutorial.
Because of length limits I won't restate the documentation here. If you are totally new to testing in TFS, take some time to go through all the topics under Testing the application, which covers everything you asked about and can help you quickly and comprehensively understand this area.

moving from log4cxx to log4net

I have a largish application that currently uses log4cxx as its logging system. However, log4cxx appears to be a dead project, and I cannot get it to work with Visual Studio 2013. As such, I am looking to move to log4net.
Our project is a mixed C++/C# project using .NET 3.5, and the logging is pretty simple.
What is the best way to handle this migration? Are there any particular problems people would expect to see, any required changes to config files, etc.?
Also, is there a simple tutorial on how to use log4net? Unless I'm misreading the documentation, it appears to be a case of reading the source examples until you figure it out.
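In case it helps others landing here: the log4net basics are small enough to show in full. This is a minimal sketch using log4net's standard API (LogManager, ILog, BasicConfigurator); the class name and messages are illustrative:

    using log4net;
    using log4net.Config;

    public class Program
    {
        // The usual log4net pattern: one static logger per type
        private static readonly ILog Log = LogManager.GetLogger(typeof(Program));

        public static void Main()
        {
            // Simplest possible setup: a console appender, no config file.
            // A real application would typically call XmlConfigurator.Configure()
            // and keep the appender configuration in App.config instead.
            BasicConfigurator.Configure();

            Log.Info("Application started");
            Log.Error("Something went wrong");
        }
    }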

Running sample projects in MvvmCross v3 (Hot Tuna)

I'm trying to run the sample projects (viz. BestSellers and Conference) present in the MvvmCross v3 branch. I resolved the strong assembly reference issues successfully. However, each time I run a sample project, I get a System.TypeLoadException in the MvxFullBinding and MvxValueConverterRegistryFiller classes.
Exception in MvxFullBinding class:
Exception in MvxValueConverterRegistryFiller class:
Is anybody able to run the sample projects successfully? How do I get around these exceptions?
It looks like you're running these as the 'Touch' projects from Visual Studio? In which case you are way ahead of what I've managed to achieve.
If that is correct, then I suspect that what you are seeing is that you have built proper PCLs in VS/Windows against the portable reference assemblies, but these cannot be executed against the current MonoTouch/Xamarin.iOS runtime.
If you try, you may see issues like an iOS black screen and MissingMethodException: Method not found: 'System.Type.op_Equality'.
For some more info see 'almost portable binaries' on http://slodge.blogspot.co.uk/2013/01/almost-portable-binaries.html
There is 'proper' PCL support currently being worked on within XamLabs - so I am hopeful that there may be a solution to this problem arriving in the Xamarin.Android Alpha channel any day now - but don't expect this to be painless initially.
Of course, I might be wrong on this - this really is new territory and I will be fascinated to hear/read about your adventures. If you want to try to find more detail, then it may help to try looking deeper into the exception details, and looking into the console log trace on your mac.
For these two particular exceptions, I can confirm that both samples... although that is when I'm working on my Mac.

Trying to run WebTest from console application

Does anyone know if it is possible to create a console application in C# that loads and runs a WebTest from a test project?
I have added a reference to the test project but get stuck when trying to call the test from main(). I am using VS 2010 to do this.
Any ideas? I've searched around but couldn't find anything on what I was specifically trying to do.
Cheers.
I don't know if you're willing to consider solutions other than a Visual Studio Web Test, but you could write your test using the free Telerik Testing Framework (http://www.telerik.com/automated-testing-tools/free-testing-framework.aspx), and it can be built into your own executable as demonstrated at http://www.telerik.com/automated-testing-tools/support/documentation/user-guide/code-samples/general/test-as-application.aspx.
If you want to take advantage of the recorder and record your tests, there's also a way of executing the recorded test from another executable: http://www.telerik.com/automated-testing-tools/support/documentation/online-api-reference/html/overload_artoftest_webaii_design_execution_runhelper_test.htm
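Alternatively, if you want to stay with the Visual Studio .webtest itself, one workaround is to have the console application shell out to MSTest.exe, which can run a .webtest passed via /testcontainer. A minimal sketch; the test path here is hypothetical and MSTest.exe is assumed to be on the PATH:

    using System;
    using System.Diagnostics;

    class WebTestRunner
    {
        static void Main()
        {
            var psi = new ProcessStartInfo
            {
                FileName = "MSTest.exe",
                // Hypothetical path to the recorded web test
                Arguments = "/testcontainer:\"MyTestProject\\PostLogin.webtest\"",
                UseShellExecute = false,
                RedirectStandardOutput = true
            };

            using (var process = Process.Start(psi))
            {
                // Relay MSTest's output (including pass/fail results) to our console
                Console.WriteLine(process.StandardOutput.ReadToEnd());
                process.WaitForExit();
                Environment.ExitCode = process.ExitCode;
            }
        }
    }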
