InstallShield allows two project file formats in a Basic MSI project: XML and Binary.
What should discourage me from using the XML project format over the Binary format?
I have two possible reasons listed below, but are they correct?
Slower opening and closing times on big project files
May slow down the InstallShield compile/build process
Build time is not affected.
Loading time is also not that different. I just checked with one of my projects, which has about 6,000 files in it, and the time to open it seems almost identical (maybe 1-2 seconds slower for the XML project).
There are two potentially significant differences, depending on how you use your projects:
As a poorly kept secret, the Binary format is compatible with Windows Installer automation and editing tools, but the XML format generally is not.
However, the XML format is more amenable to text-based viewing, editing, source control, and XML-based automation.
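For a concrete illustration of that last point, here is a minimal sketch of XML-based automation: bumping ProductVersion in an XML-format .ism with nothing but standard XML tooling. The table/row/td layout and cell order shown here are my assumptions about the XML project format, so verify them against one of your own saved .ism files before relying on this.

```csharp
// Minimal sketch (not production code): bump ProductVersion in an XML-format .ism.
// The <table>/<row>/<td> layout and cell positions below are assumptions about
// InstallShield's XML project format -- check against one of your saved .ism files.
using System.Xml;
using System.Xml.Linq;

var path = @"C:\projects\MyInstaller.ism"; // placeholder path

// The .ism may carry a DOCTYPE; ignore the DTD so XDocument will load it.
// (Note: saving this way drops the DOCTYPE, so treat this as illustrative only.)
XDocument ism;
var settings = new XmlReaderSettings { DtdProcessing = DtdProcessing.Ignore };
using (var reader = XmlReader.Create(path, settings))
    ism = XDocument.Load(reader);

var versionCell = ism.Descendants("table")
    .First(t => (string?)t.Attribute("name") == "Property")   // the MSI Property table
    .Elements("row")
    .First(r => (string?)r.Elements("td").First() == "ProductVersion")
    .Elements("td").Skip(1).First();                          // second cell = value

versionCell.Value = "1.2.3";
ism.Save(path);
```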
There are several additional differences I know about that I consider insignificant:
The representations differ, so you will see size differences in the resulting project files.
Saving as XML is a conversion, since InstallShield works internally with the Binary format, so it takes some (minimal) extra time to load and save the project file.
The code that converts internally between XML and Binary representation is well-used, but it's still a conversion. That's additional complexity, so loading and saving as XML has more points of potential failure.
There are some slight behavioral differences after deleting a record "in the middle." Saving and loading through XML will normalize this and add a new record at the end; working in Binary may reuse the deleted record's spot for the next record. However, record order is not semantically significant.
I have a .NET 4.0 C# Solution with a single .csproj (Library) having several thousand files.
I want to extract out a small subset of the functionality from the thousands of files.
E.g., I want to extract the functionality of the MyLibrary.RelevantMethod() method into another library.
The aim is to create a new .csproj with the bare minimum class files needed to achieve this functionality.
I have a Program.cs which invokes the functionality, and I can navigate through the flow to find all the classes involved. It's just that there are too many (though still a small subset of all classes).
Solutions tried:
The usual brute force of going through the flow from the method (F12) and copying over every class file and associated file needed for it to compile. This is taking a lot of time, but I know that if I keep at it, it'll get done, so that is what I am doing right now.
The other option was to copy over the whole project and eliminate folders of classes based on instinct/namespace references, building to verify and keeping at it. This got nasty because only a subset of the classes in a folder were needed.
The VS 2013 code-map graphs became unmanageable within 3 drill-downs. Sequence diagrams became too complex as well.
Call hierarchy seemed the most promising, showing all the classes involved visually, but there is still the manual task of drilling through and copying the classes.
While I manually continue extracting the classes one by one using the call hierarchy, is there a faster or more automated way (semi-automated works as well) to determine all the classes involved in a method call in C#?
If I can get the list, I can do a search on the physical folders containing the .cs files (every class has an equivalent .cs file) and just copy them over.
You can find all classes involved in a method call with the Runtime Flow tool (developed by me). From the Runtime Summary window you can also copy these classes to the Clipboard for the selected module or a namespace.
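If a runtime tool isn't an option, you can also approximate the list statically with Roslyn. A rough sketch follows (the solution path, project name, and seed type/method names are placeholders for your own): it walks outward from the seed method, collects every type declared in your source that is reachable from it, and prints the .cs file of each.

```csharp
// Rough sketch: transitively collect all source-declared types reachable from one method.
// Requires the Microsoft.CodeAnalysis.Workspaces.MSBuild and Microsoft.Build.Locator packages.
// Solution path, project name, and type/method names below are placeholders.
using Microsoft.Build.Locator;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.MSBuild;

MSBuildLocator.RegisterDefaults();

using var workspace = MSBuildWorkspace.Create();
var solution = await workspace.OpenSolutionAsync(@"C:\src\MySolution.sln");
var project = solution.Projects.First(p => p.Name == "MyLibrary");
var compilation = (await project.GetCompilationAsync())!;

var seed = compilation.GetTypeByMetadataName("MyLibrary.MyClass")!
                      .GetMembers("RelevantMethod").First();

var found = new HashSet<INamedTypeSymbol>(SymbolEqualityComparer.Default);
var queue = new Queue<ISymbol>();
queue.Enqueue(seed);

while (queue.Count > 0)
{
    var symbol = queue.Dequeue();
    foreach (var declRef in symbol.DeclaringSyntaxReferences)
    {
        var node = await declRef.GetSyntaxAsync();
        var model = compilation.GetSemanticModel(node.SyntaxTree);
        foreach (var descendant in node.DescendantNodes())
        {
            var referenced = model.GetSymbolInfo(descendant).Symbol;
            var type = referenced as INamedTypeSymbol ?? referenced?.ContainingType;
            // Follow only types declared in this project's source, not framework types.
            if (type != null && type.Locations.Any(l => l.IsInSource) && found.Add(type))
                queue.Enqueue(type); // the whole type declaration gets scanned next
        }
    }
}

// Every reachable source type, mapped to the .cs file(s) that declare it.
foreach (var file in found.SelectMany(t => t.Locations)
                          .Where(l => l.IsInSource)
                          .Select(l => l.SourceTree!.FilePath)
                          .Distinct())
    Console.WriteLine(file);
```

Note that this over-approximates: once a type is reachable, everything its other members reference comes along too. For your goal, getting a subset that compiles, that is usually what you want anyway.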
See the title: I have around 50 XSD files importing each other (with tags) and I need to analyze their dependencies.
Do you know any software (preferably free) to generate a dependency diagram automatically from these files?
I did not find any existing program to do that, so... I developed my own! It is called GraphVisu.
There is one program to generate the graph structure from seed XSD files, and another to visualise the graphs. I also included detection of clusters of interrelated nodes (called "strongly connected components" in graph theory).
Feel free to use it!
I am not aware of any free solution tailored specifically for XSD. If I had to build one from freely available components, I would probably consider GraphViz. You would need to write a module that parses the XSD files and generates the data GraphViz needs. The parsing is fairly trivial, as long as you take into account how schema location works and is resolved, and handle circular dependencies correctly. The good thing is that GraphViz is supported on a wide set of platforms, and as long as you can parse XML, you should be set.
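To illustrate, a bare-bones version of that module can be very short. This sketch (the folder path is a placeholder, and schemaLocation resolution is naively reduced to file names) scans a directory of XSDs and writes a GraphViz DOT file:

```csharp
// Bare-bones sketch: emit a GraphViz DOT graph of xs:import/xs:include dependencies.
// The folder path is a placeholder, and schemaLocation resolution is simplified
// to bare file names -- real resolution needs base-URI handling.
using System.Xml.Linq;

XNamespace xs = "http://www.w3.org/2001/XMLSchema";
var edges = new List<string>();

foreach (var path in Directory.GetFiles(@"C:\schemas", "*.xsd"))
{
    var doc = XDocument.Load(path);
    foreach (var dep in doc.Root!.Elements()
                 .Where(e => e.Name == xs + "import" || e.Name == xs + "include"))
    {
        var loc = (string?)dep.Attribute("schemaLocation");
        if (loc != null)
            edges.Add($"  \"{Path.GetFileName(path)}\" -> \"{Path.GetFileName(loc)}\";");
    }
}

File.WriteAllText("deps.dot", "digraph xsd {\n" + string.Join("\n", edges) + "\n}\n");
// Render with: dot -Tsvg deps.dot -o deps.svg
```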
I've also developed my own, in the form of an XML Schema Refactoring (XSR) add-on for QTAssistant. This particular feature set has been around since 2004, so it works really well, including with WSDL and XSD files.
Your question can be interpreted in different ways, so I'll describe what you can do with XSR:
XSD file dependencies
This is a simple one, showing a hierarchical layout.
This is a more complex one, showing an organic layout.
Intra-XSD-file schema component dependencies: these can be filtered on arbitrary criteria (not sure what you meant by "with tags").
Schema component dependencies across an XSD file set (same as the above, but one can navigate across different files).
The tool comes with an automation library, where you can write a few lines of C# or JavaScript code which you can then invoke using the QTAssistant shell or a command-line shell, to integrate it with an automated build process.
Other features include the ability to export the underlying data as GraphML, in case you wish to analyse or process the graph further (e.g. topological sorting, finding cycles, etc.).
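If you go the GraphML route, that further analysis is easy to do yourself. Here is a small sketch (the file name is a placeholder for whatever your export is called) that loads the exported nodes and edges and runs Kahn's algorithm; whatever cannot be topologically sorted sits on at least one dependency cycle:

```csharp
// Sketch: topologically sort a GraphML dependency export with Kahn's algorithm.
// Nodes left over after the sort are part of at least one dependency cycle.
// The file name is a placeholder.
using System.Xml.Linq;

XNamespace g = "http://graphml.graphdrawing.org/xmlns";
var doc = XDocument.Load("dependencies.graphml");

var nodes = doc.Descendants(g + "node").Select(n => (string)n.Attribute("id")!).ToHashSet();
var edges = doc.Descendants(g + "edge")
               .Select(e => ((string)e.Attribute("source")!, (string)e.Attribute("target")!))
               .ToList();

// Kahn's algorithm: repeatedly remove nodes with no incoming edges.
var inDegree = nodes.ToDictionary(n => n, _ => 0);
foreach (var (_, target) in edges) inDegree[target]++;

var ready = new Queue<string>(nodes.Where(n => inDegree[n] == 0));
var sorted = new List<string>();
while (ready.Count > 0)
{
    var n = ready.Dequeue();
    sorted.Add(n);
    foreach (var (source, target) in edges)
        if (source == n && --inDegree[target] == 0)
            ready.Enqueue(target);
}

Console.WriteLine(sorted.Count == nodes.Count
    ? "No cycles. Topological order:\n" + string.Join("\n", sorted)
    : "Cyclic dependencies involve: " + string.Join(", ", nodes.Except(sorted)));
```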
I have read many times that YUI will load modules dynamically based on need, or based on the parent module. For example, here it is written:
The overlay module will pull in the widget, widget-stack,
widget-position, widget-position-align, widget-position-constrain and
widget-stdmod extensions it uses.
So, how can I determine the final size of the data downloaded for a web page due to YUI usage?
Actually, I was wondering how one can compare the data size of YUI with that of another library (jQuery).
If you want a file size vs. functionality comparison, a close approximation based on features would be the simpleyui.js package (though the feature compilation is not 1:1); be sure to look at the gzipped size, as Tivac said.
Also keep in mind that JS lib file size comparison used as a reason to choose one over another is often a red herring. Your site likely includes a number of images, many of which will easily be larger than the lib and several additional modules. More relevant comparisons would be how the library is structured, what its relative strengths are, what's included out of the box (officially supported features vs. 3rd-party plugins), its community and documentation, etc. Pretty much any lib will serve your basic DHTML needs, and neither you nor your users will notice the difference. Choose what works for you and helps you build clean, maintainable code that you or your successor won't hate in a few months.
The Configurator will give you a file-size breakdown and automatically select the needed modules.
We are branching out beyond the development team and trying to get other groups within my company to use version control for important documents that need change tracking. One frequent need is for Excel spreadsheets. These are large spreadsheets, modified fairly frequently (weekly or monthly) but with only a small portion of the cells changed each time.
Just sticking the files in Subversion (the particular tool we are using) gives a history of changes and keeps old versions. And the TortoiseSVN client makes it easy for non-technical users. Recent versions of TortoiseSVN even contain a script which can be used to perform nice visual diffs between Excel documents.
My remaining concern is disk space. These are large documents. The diffs between versions are small, but I worry that the version control will notice that the file is binary and fall back to storing each version separately. Does anyone know of a solution to this? For instance, a format we could save in where the diffs would be small, so only differences get stored, or a version control system that is specifically aware of Excel files? I have not yet done performance testing, but our version control server is already badly taxed, and if there is a better solution I'd love to know what it is.
Currently SVN cannot efficiently store those types of files. There has been some discussion about it, though:
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=462&dsMessageId=651443
This SO question shows a graph of repository growth when storing an OpenXML Office document. The results were pretty linear:
Will Subversion efficiently store OpenXML Office documents?
Although your question wasn't specifically about that format, it may still apply. You might just need to run a test in SVN and see what kind of storage it takes. SVN is pretty good at storing binary files, so it might not be too terrible. The SO question above also mentions saving the file as a plain-text XML 2003 document, which you might also investigate.
One consideration is using Team Foundation Server for source control (if that's an option), which will just store your delta changes, although it may be a bit heavy for what you're looking for.
From my understanding, binary vs. text doesn't have an impact on the storage size in SVN: http://help.collab.net/index.jsp?topic=/faq/svnbinary.html
I'm working out the reasonableness of a request to keep all documents containing executable code out of a document management system. This is above and beyond the existing protections of restricting file extensions to a short list and running each file through Norton antivirus before we save it.
So far, .doc(x), .xls(x), and .htm are all common document types that I can't demand people stop using and that can have executable code in them.
Does the technology exist to check common document types for the existence of executable code?
Unfortunately, this might be a losing game.
If you really want to completely restrict documents to ones that cannot contain executable code, you are probably better off compiling a list of allowable document types instead of deniable document types. There will always be new file formats with executable code, and even new versions of old formats that have added executable code (such as PDF, as mentioned by Kevin).
The only way to make this safe would be to compile a list of allowable formats and maintain it over time.
Note that security vulnerabilities in the viewer program, such as buffer-overflow vulnerabilities, can be abused to execute code via a file format that does not normally have such a feature.
PDF is one.
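That said, targeted checks are feasible for specific formats. In the Office Open XML formats, macro code lives in a vbaProject.bin part inside the ZIP container, so a quick scan of the archive flags it. A minimal sketch (the path is a placeholder; legacy binary .doc/.xls are OLE compound files and need a different parser, and .htm needs a scan for script tags instead):

```csharp
// Minimal sketch: flag Office Open XML documents (.docx/.docm/.xlsx/.xlsm) that
// carry a VBA macro project, which lives in a vbaProject.bin part inside the
// ZIP container. Legacy binary .doc/.xls are OLE compound files and need a
// different parser; .htm files need a scan for <script> tags instead.
using System.IO.Compression;

static bool HasVbaProject(string path)
{
    using var zip = ZipFile.OpenRead(path);
    return zip.Entries.Any(e =>
        e.Name.Equals("vbaProject.bin", StringComparison.OrdinalIgnoreCase));
}

Console.WriteLine(HasVbaProject(@"C:\docs\upload.docm")); // placeholder path
```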