I have a .NET 4.0 C# Solution with a single .csproj (Library) having several thousand files.
I want to extract out a small subset of the functionality from the thousands of files.
e.g. I want to extract the functionality of the MyLibrary.RelevantMethod() method into another library.
The aim is to create a new .csproj with the bare minimum class files needed to achieve this functionality.
I have a Program.cs which invokes the functionality, and I can navigate through the flow to find all the classes involved; it's just that there are too many of them (though still a small subset of all the classes).
Solutions tried:
The usual brute force of going through the flow from the method (F12) and copying over every class file and associated file needed for it to compile. This is taking a lot of time, but I know that if I keep at it, it'll eventually be done, so that is what I am doing right now.
The other option was to copy over the whole project and eliminate folders of classes based on instinct/namespace references, building to verify and repeating. This got nasty because only a subset of the classes in a folder were needed.
The VS 2013 code map graphs became unmanageable within three drill-downs. Sequence diagrams became too complex as well.
Call Hierarchy seemed the most promising, showing all the classes involved visually, but there is still the manual task of drilling through and copying the classes.
While I manually continue extracting the classes one by one using Call Hierarchy, is there a faster or more automated way (semi-automated works as well) to determine all the classes involved in a method call in C#?
If I can get the list, I can do a search on the physical folders holding the .cs files (every class has an equivalent .cs file) and just copy them over.
You can find all classes involved in a method call with the Runtime Flow tool (developed by me). From the Runtime Summary window you can also copy these classes to the clipboard for a selected module or namespace.
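If you would rather stay with stock tooling, below is a rough, semi-automated sketch using Roslyn (the Microsoft.CodeAnalysis NuGet packages): it walks the call graph starting from the entry method and prints the .cs file of every type it reaches. The solution path, project name, and the MyLibrary.MyClass.RelevantMethod symbol names are placeholders taken from the question, and the traversal only follows method invocations (field types, base classes, and reflection are not covered), so treat it as a starting point rather than a finished tool.

```csharp
// Sketch only: requires Microsoft.CodeAnalysis.CSharp.Workspaces and
// Microsoft.CodeAnalysis.Workspaces.MSBuild (recent versions also need a call to
// MSBuildLocator.RegisterDefaults() before creating the workspace).
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.MSBuild;

class CallGraphFileLister
{
    static async Task Main()
    {
        using var workspace = MSBuildWorkspace.Create();
        var solution = await workspace.OpenSolutionAsync(@"C:\src\MySolution.sln"); // placeholder
        var project  = solution.Projects.First(p => p.Name == "MyLibrary");         // placeholder
        var compilation = await project.GetCompilationAsync();

        // Entry point: the method whose dependencies we want to extract (placeholder names).
        var entry = compilation.GetTypeByMetadataName("MyLibrary.MyClass")
                               .GetMembers("RelevantMethod")
                               .OfType<IMethodSymbol>()
                               .First();

        var pending = new Queue<IMethodSymbol>();
        var visited = new HashSet<IMethodSymbol>(SymbolEqualityComparer.Default);
        var files   = new SortedSet<string>(StringComparer.OrdinalIgnoreCase);
        pending.Enqueue(entry);

        while (pending.Count > 0)
        {
            var method = pending.Dequeue();
            if (!visited.Add(method)) continue;

            foreach (var reference in method.DeclaringSyntaxReferences)
            {
                files.Add(reference.SyntaxTree.FilePath);
                var body  = await reference.GetSyntaxAsync();
                var model = compilation.GetSemanticModel(reference.SyntaxTree);

                // Every invocation inside the method body points at another method,
                // and therefore at another .cs file the extracted project will need.
                foreach (var call in body.DescendantNodes().OfType<InvocationExpressionSyntax>())
                {
                    if (model.GetSymbolInfo(call).Symbol is IMethodSymbol callee &&
                        SymbolEqualityComparer.Default.Equals(callee.ContainingAssembly, compilation.Assembly))
                    {
                        pending.Enqueue(callee);
                    }
                }
            }
        }

        foreach (var file in files)
            Console.WriteLine(file); // the .cs files to copy into the new project
    }
}
```

The printed list can then be fed to the file-copy step described in the question.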
It is 2017, and as far as I know, the way programmers organize their code has not changed. We distribute our code into files and organize them with a tree structure (nested directories and files). When the codebase is huge and the relations between classes/components are complex, this organization approach strikes me as inefficient. With more files, either one directory holds more files or the depth of the directories increases. And since we handle the directories directly, navigation costs time and effort without tools like search.
Figure: A complex UML from https://github.com/CMPUT301W15T09/Team9Project/wiki/UML
We can use CAD to design/draw complex things; mind maps can be created in a similar manner. For these, we do not need to deal with file systems. Can't we have something similar and hide the file system in a black box? Why have the fundamental organization methods not evolved in such a long time?
So I wonder: what are the advantages that keep us from adopting a new way? What are the inherent advantages of using the file system to organize our code?
Different on-disk representations of source code have been tried (e.g. how Flash stores ActionScript inside binary .fla files) and they're generally unpopular. No one likes proprietary file formats. It also means you can't use text-based source control systems like Git, which means you can't do a text merge to resolve change conflicts.
We store source code in files in a tree structure (e.g. one OOP class or procedural module per file), with nested namespaces represented by nested directories because it's intuitive (and again, for better cohesion with source-control systems).
Some languages enforce this. Java, for example, requires the source file to be named the same as the class it contains and to live in a directory structure that matches its containing package. For other languages like C# and C++ it just makes sense - because otherwise it's confusing to someone who might be new to your codebase when they see class TurboEncabulator inside a file named PrefabulatedAmulite.cs.
Let's say I have 2 features, both use abc.dll, and both reference it from their respective current directories.
So the output will look like this:
Feature1
    abc.dll
Feature2
    abc.dll
I've created 2 components for this. In reality I have many features and many DLLs that are shared, and my installer size is nearly 1 GB.
What I am looking for is a smarter way to do this, using IS 2015 professional.
What I've looked at so far:
Merge modules: Not sure if this would work; it also means I would need to maintain the merge modules manually should files be upgraded.
DuplicateFile, via the direct editor, but this wouldn't work because there is no way to bind it to a feature, only to a component.
A hidden feature which would install the shared files to the target system, then a post-install script which would copy these files to their respective feature folders and delete the hidden feature's folder.
Is there a best practice method to implement what I need?
The most suitable approach in this case is, indeed, merge modules. I am not sure why the concern about maintaining them - you should have an automated build process that creates all merge modules and then builds your main installer with the newly created modules.
However, in my opinion, merge modules are a bit cumbersome to use if you have a lot of custom actions.
An alternative to merge modules - assuming you are using a Windows Installer project - is using small MSI packages which you "chain" to your main installer (you can chain multiple packages with different conditions and supply different properties). Here too, you should have a build process which builds all those small MSI packages and then builds the main installer.
If you don't want to have this kind of 'sub-projects', then the option of a hidden feature with a post action is acceptable; I've seen it, and done it, a few times. Note that if you target Windows 7 or later, instead of physically copying the files and deleting them, you can use symbolic links (using the mklink command), which helps reduce the installation's footprint on the target system (and makes patching easier: you replace the original file, and all its links are updated automatically).
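For illustration only, here is a minimal C# sketch of that post-action idea using symbolic links instead of file copies; it P/Invokes the same CreateSymbolicLink API that mklink uses. The paths and feature folder names are made up, and creating symlinks normally requires elevation (which an installer custom action typically runs with).

```csharp
// Hedged sketch of the "hidden feature + post action" idea: instead of copying the
// shared files into each feature folder, create symbolic links to the copy that the
// hidden feature installed. All paths below are illustrative placeholders.
using System;
using System.IO;
using System.Runtime.InteropServices;

static class SharedFileLinker
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    [return: MarshalAs(UnmanagedType.I1)]
    static extern bool CreateSymbolicLink(string linkPath, string targetPath, uint flags);

    const uint SYMBOLIC_LINK_FLAG_FILE = 0x0;

    static void Main()
    {
        // Where the hidden feature dropped the shared binaries (placeholder path).
        var sharedDir = @"C:\Program Files\MyApp\Shared";
        // Feature folders that each expect their own copy of the shared DLLs.
        var featureDirs = new[]
        {
            @"C:\Program Files\MyApp\Feature1",
            @"C:\Program Files\MyApp\Feature2"
        };

        foreach (var featureDir in featureDirs)
        {
            foreach (var dll in Directory.GetFiles(sharedDir, "*.dll"))
            {
                var link = Path.Combine(featureDir, Path.GetFileName(dll));
                if (!File.Exists(link) &&
                    !CreateSymbolicLink(link, dll, SYMBOLIC_LINK_FLAG_FILE))
                {
                    Console.Error.WriteLine(
                        $"Link failed for {link}: error {Marshal.GetLastWin32Error()}");
                }
            }
        }
    }
}
```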
See the title: I have around 50 XSD files importing each other (with tags) and I need to analyze their dependencies.
Do you know any software (preferably free) to generate a dependency diagram automatically from these files?
I did not find any existing program to do that, so... I developed my own! It is called GraphVisu.
There is a first program to generate the graph structure from seed XSD files, and another one to visualise graphs. I also included a detection of clusters of interrelated nodes (called "strongly connected components" in graph theory).
Feel free to use it!
I am not aware of any free solution tailored specifically for XSD. If I had to build it using freely available components, I would probably consider GraphViz. You would need to write a module to generate the data needed by GraphViz, which would come from parsing the XSD files. The latter is kind of trivial, as long as you take into account how schema location works and is resolved, and handle circular dependencies correctly. The good thing is that GraphViz is supported on a wide set of platforms, and as long as you can parse XML, you should be set.
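As a rough illustration of that approach, the sketch below (plain C#, no external libraries) scans a folder of .xsd files, pulls the schemaLocation from each xs:import/xs:include, and writes a GraphViz DOT graph to standard output. It deliberately skips the schema-location resolution and circular-dependency subtleties mentioned above, and the folder argument is just a placeholder.

```csharp
// Minimal XSD-to-DOT generator: one node per schema file, one edge per
// xs:import/xs:include. Relative schemaLocation paths are not resolved.
using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

class XsdDotGenerator
{
    static void Main(string[] args)
    {
        var dir = args.Length > 0 ? args[0] : ".";   // folder with the ~50 XSD files
        XNamespace xs = "http://www.w3.org/2001/XMLSchema";

        Console.WriteLine("digraph xsd_dependencies {");
        foreach (var path in Directory.GetFiles(dir, "*.xsd"))
        {
            var doc  = XDocument.Load(path);
            var from = Path.GetFileName(path);

            // Both xs:import and xs:include create an edge to the referenced schema.
            var targets = doc.Root.Elements(xs + "import")
                             .Concat(doc.Root.Elements(xs + "include"))
                             .Select(e => (string)e.Attribute("schemaLocation"))
                             .Where(loc => !string.IsNullOrEmpty(loc));

            foreach (var loc in targets)
                Console.WriteLine($"  \"{from}\" -> \"{Path.GetFileName(loc)}\";");
        }
        Console.WriteLine("}");
    }
}
```

Redirect the output to a file and render it with any GraphViz front end, e.g. `dot -Tsvg deps.dot -o deps.svg`.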
I've also developed my own, in the form of an XML Schema Refactoring (XSR) add-on for QTAssistant. This particular feature set has been around since 2004, so it works really well, including with WSDL and XSD files.
I can interpret differently what you asked, so I'll refer to what you could do with XSR:
XSD files dependencies
This is a simple one, showing a hierarchical layout.
This is a more complex one, showing an organic layout.
intra-XSD file schema components dependencies: can be filtered on arbitrary criteria (not sure what you meant by with tags).
XSD file set schema components dependencies (same as the above, but one can navigate across different files)
The tool comes with an automation library, where you can write a few lines of C# or JavaScript code which you can then invoke using the QTAssistant shell or a command-line shell to integrate it with an automated build process.
Other features include the ability to export the underlying data using GraphML, in case you wish to analyse or process the graph further (e.g. topological sorting, cycles, etc.).
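To give an idea of what that further processing might look like, here is a small, self-contained C# sketch that takes a dependency edge list (for example, extracted from the exported GraphML or from the DOT generator above) and produces a processing order via Kahn's algorithm, or reports a cycle. The file names are sample data only.

```csharp
// Topological sort over schema dependencies: (dependent, dependency) edges.
using System;
using System.Collections.Generic;
using System.Linq;

static class SchemaOrder
{
    static void Main()
    {
        // Sample edges: "a.xsd depends on b.xsd", etc. Replace with real data.
        var edges = new[] { ("a.xsd", "b.xsd"), ("b.xsd", "c.xsd"), ("a.xsd", "c.xsd") };
        var nodes = edges.SelectMany(e => new[] { e.Item1, e.Item2 }).Distinct().ToList();

        // remaining[n] = how many unresolved dependencies schema n still has.
        var remaining = nodes.ToDictionary(n => n, _ => 0);
        foreach (var (dependent, _) in edges) remaining[dependent]++;

        var ready = new Queue<string>(nodes.Where(n => remaining[n] == 0));
        var order = new List<string>();

        while (ready.Count > 0)
        {
            var schema = ready.Dequeue();
            order.Add(schema);
            // Everything that depended on this schema now has one dependency fewer.
            foreach (var (dependent, dependency) in edges)
                if (dependency == schema && --remaining[dependent] == 0)
                    ready.Enqueue(dependent);
        }

        Console.WriteLine(order.Count == nodes.Count
            ? "Processing order: " + string.Join(" -> ", order)
            : "Cycle detected among the schemas.");
    }
}
```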
I'd like to use @Grape in my Groovy program, but my program consists of several files. The examples on the Groovy Grape page all seem to assume that your script will consist of one file. How can I do this? Should I just add it to one of the files and expect that the imports will work from the others? If so, is it common to place all the @Grape calls in one file with no other code? Do I need to add the Grape call to every file that will import the package? Do I need to download the JAR and create a Gradle file, which I was getting away without up to this point?
The Grape engine and the @Grab annotation were created as part of core Groovy with single-file scripts in mind, to allow a chunk of text to easily become a fully functional program.
For larger applications, Gradle is an awesome build tool with lots of useful features.
But yes, you can manage all the application dependencies just with Grape.
Whether you annotate every file or a single one does not matter; just make sure the @Grab-annotated file is read before you try to use the external class.
Annotating the main class is probably better, as you will easily lose track of library versions if the annotations are scattered.
And yes, you should consider Gradle for any application with more than a dozen files, or anything you might want to reuse elsewhere as a library.
In my opinion, it depends how your program is to be run...
If your program is to be run as a collection of standalone scripts, then I'd probably stick the @Grab annotations required for each script at the top of each of them.
If your program is more of a standard style program with a single point of entry, then I'd go for using a build tool like Gradle (as you say), as you get a lot of easy wins by using it.
Firstly, it makes it easy to define your dependencies (and build a single large jar containing all of them).
Secondly, Gradle makes it really easy to start writing tests, include code coverage plugins, or add useful tools like CodeNarc to suggest possible fixes or improvements to your code. These all become invaluable not only for improving your code (or knowing your code works), but also when refactoring your code, as you know you've not broken anything that used to work.
Let's say we have multiple libraries (DLLs) whose features we want to use in an application, and we want to use them as a single DLL.
Is it possible to merge the DLLs into a single one, with all the features packed into it? I am not looking at the option to write a wrapper.
EDIT:
I've revisited the problem. Now all I want to do is bring all the projects under one solution and get a single DLL as the output instead of each project having its own independent output. Is this possible?
You can't literally merge several compiled .dll files into one. Your best bet is to put all files into a single project and recompile as a single library. You will likely have conflicts you'll have to resolve manually.
If you really have several COM in-proc servers you will also have to merge the data that facilitates class factories and COM registration - you will have to do that manually.