Publishing Debug Symbols in TFS Build - visual-studio-2012

Using TFS 2013, it is a simple matter to generate debug symbols as part of the build process by entering a location into the ‘Path to publish symbols’ field of the build definition. Unfortunately, I can’t use any of the TFS build environment variables to specify the drop location for the symbols in that field, because symbol publishing takes place after the build is done and those variables are apparently no longer in scope. So I specified a Debug folder in a fixed location and was going to move it to the desired location with the post-build script. Even that does not work, because the symbols are not yet present when the post-build script runs. The order of events is (roughly):
1. Run prebuild script
2. Build
3. Run postbuild script
4. Tests
5. Generate symbols
It looks like this is typically accomplished with yet another server… a Symbols Server. Is that what everyone does?
I notice that the information needed to determine the proper location to save the files (for me, anyway) can easily be found in ..\000Admin\server.txt. Using that info, I could have the post-build script wait (say… up to an hour) for the symbols to appear (they should be there within a minute), then move the Debug folder from the fixed location to the proper location. Is there a better way?
Thanks.

The symbol server / symbol share is a separate thing from the drop location. It's structured in a specific way the debugger understands, and it allows one to debug an application without having to ship the .pdb files with the application.
If you want to provide other parties access to your symbol server (similar to how Microsoft allows access to their symbol servers for most of the .NET Framework), you can simply tell them the location and, optionally, the credentials needed to access it.
The symbol share is not really meant for human consumption, it's all built up with GUIDs and hashes so that the debugger can find its way around easily and quickly. It's also structured so that multiple versions of the same symbol are stored side-by-side.
Especially that last part, storing different assemblies and different versions side by side in the same location, is why you should not try to inject project names or versions into the symbol share location. That's for the debugger to figure out.
Just to be clear, it doesn't have to live on a different server. The only thing required is that you enter a path to a share; it can even be a sub-folder of that share. So sometimes you see configurations like:
\\tfs\symbols\
\\tfs\builds\
Or
\\tfs\artifacts\symbols
\\tfs\artifacts\drops
But indeed, you could drop your symbols to a completely different server altogether:
\\tfs\builds
\\corporate\symbols
Or you could configure multiple distinct computer names for one system (or use multiple DNS records) and actually have the same server listen to:
\\tfs-symbols\share
\\tfs-builds\share
Or even register the shares at the Active Directory level, allowing you to just use
\\tfs-symbols\
\\tfs-builds\
What you choose is entirely up to you, but make sure that the symbol path and the build drop path end up being distinct locations.

Related

How do you get the file referenced in a LotusScript %INCLUDE declaration?

We have several applications that include a specific '.lss' file created years ago by a vendor; the file isn't on any of the Domino servers and no employee has a copy. Since the applications are working in our production environment all is good, but when we need to make a code change we get an error because we don't have that file in our installation path to be compiled.
Is there a way to get the contents of the file from an application in our production environment, since that is a compiled version of the NSF? Does the file also need to be on the server somewhere if the application is working in our production environment?
I don't recall the exact process that is involved (it's been over a decade since I did it), but some vendors that want to hide their code use LSS files. They compile it into script libraries and agents, and distribute only the compiled version. If this is the case for the code you're looking for, it's not going to be possible to recover the LSS file.

How should I go about using a temporarily changed copy of a DLL locally when it's been checked in to TFS?

We have a Libraries folder where we keep third-party DLLs and our own utility DLLs for all applications to reference. I want to do development against one of our utility DLLs and an application that consumes it at the same time. But if I check out the library DLL to change it for temporary local use, TFS insists on checking it out exclusively, which trips other people up. I understand the reasoning behind it doing that (hard/impossible to merge a DLL, so two people shouldn't be working on one at the same time), but I just want to mess with my local copy while I'm working on the library it represents.
I suppose I could delete my application's reference to the DLL and recreate the reference pointing to some other place, but of course this just begs for me to forget and check it in like that, which would obviously be bad. Not to mention that this is a pain in the neck.
How should I proceed in such a situation?
You are using a server workspace, which does not allow editing outside of TFS. In TFS 2012, local workspaces were introduced; they do not set the read-only flag on files, so you are free to edit at will.
You can change your existing workspace in a few clicks: http://msdn.microsoft.com/en-us/library/bb892960.aspx
You could just go into the file system and mark the file as writeable. Once you are happy the binary is good you could check it out, copy the new version of the file over and check it back in again. TFS marks binary files like this as locked for good reason, as you can't merge them in the way you can with textual content.
The best approach would be to use a NuGet repository to manage your binary dependencies, instead of relying on binaries checked into source control.

Source code location for debugging multiple instances of an application

I have an application running separately (one instance per customer) in different folders, one per customer.
Each customer is a separate user on my machine.
At the moment I have the source code in each of these folders, and I rebuild the code for each instance. Would it be better if I did something like the following?
create a shared folder where I build the code
deploy the binary in each user folder.
give each user read-only access to the source code.
when it is time to debug, run gdb in each user folder; it will be able to read the shared source code and debugging will work.
Do you think this would be a better approach, or is there a better practice?
My only concern is that each user would be able to read the source code, but since users do not access their folders directly (they are under my control), this should not trouble me.
I am using CentOS 6.4, SVN, and g++/gdb.
in different folders
There are no "folders" on UNIX, they are called directories.
I rebuild the code per each instance
Why would you do that?
Is the code identical (it sounds like it is)? If so, build the application once. There is no reason at all to have multiple copies of the resulting binary, or the sources.
If you make the directory with sources and binaries world-readable, then every user will be able to debug it independently.
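A minimal sketch of that setup, assuming a hypothetical shared location under /opt/shared/app and a binary built with debug information (the paths and user names below are placeholders):

# build once, with debug info, in a world-readable shared directory
g++ -g -O0 -o /opt/shared/app/bin/myapp /opt/shared/app/src/*.cpp
chmod -R a+rX /opt/shared/app

# from any customer's directory, debug the shared binary; gdb finds the
# sources via the paths recorded at compile time, and you can also add
# the source directory explicitly
cd /home/customer1
gdb /opt/shared/app/bin/myapp -ex "directory /opt/shared/app/src"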

How do I Create a Movable Type 5 Development Environment?

I want to create a development environment running Movable Type 5.
To create a separate development environment, I need to copy the production setup so that development reflects it.
How would I go about building a good environment?
There are many ways to build a development environment, and an experienced Movable Type developer would need to know more about your goals in order to make a good recommendation.
All of the following guidance assumes that Movable Type has been installed and is ready to run
on the development server.
Here are a few basic tips:
Although some of the key configuration details for a Movable Type instance are kept in mt-config.cgi, there are website-level and blog-level settings that are of equal importance that are kept in the underlying database.
Since most Movable Type 5 instances use MySQL as the database backend, it's possible to dump the entire contents of the Movable Type database using the mysqldump utility or a more visual tool like the Export function of phpMyAdmin. This produces a large text file with MySQL CREATE TABLE and INSERT statements.
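For example (the user, database name, and output file here are placeholders for your own):

mysqldump -u mt_user -p mt_database > mysite.sql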
Once the database is dumped to a file, the file can be moved to another server, modified, and reconstituted. One of the tasks we commonly perform at that point is to go through the database using an editor, the UNIX sed command, or some similar process, and perform global searches and replacements for the URLs and file system paths that are embedded in the database dump.
This is necessary in many cases because your production website may be http://www.mysite.com/, but your development environment may be http://dev.mysite.com/ or even http://localhost/. Similarly, the file system paths in production may be /var/www/mysite/htdocs/... while development may be /opt/local/apache2/htdocs/mysite/....
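A sketch of that kind of global replacement with sed, using the example hostnames and paths above as placeholders:

# swap the production URL and document root for their development equivalents
sed -i.bak \
    -e 's|http://www.mysite.com/|http://dev.mysite.com/|g' \
    -e 's|/var/www/mysite/htdocs/|/opt/local/apache2/htdocs/mysite/|g' \
    mysite.sql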
Once changes of this nature are made and the modified file is saved, the database is reconstituted on the development server by using a UNIX shell command like the following, where mt_user and mt_database are placeholders for the development database user and database name (the -p flag will prompt for the password):
cat mysite.sql | mysql -u mt_user -p mt_database
Or by using the Import function of phpMyAdmin on the development server.
Once all of this is done, the mt-config.cgi file from production needs to be copied into the Movable Type working directory and rewritten so that several important elements are changed:
CGIPath
StaticWebPath
Database
DBUser
DBPassword
DBHost
These Movable Type Configuration Directives are discussed in the online documentation.
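As a rough sketch, the rewrite could be done with sed (the development values below are purely hypothetical; in practice you may prefer to edit the file by hand):

# adjust mt-config.cgi on the development server; all values are placeholders
sed -i.bak \
    -e 's|^CGIPath .*|CGIPath http://dev.mysite.com/cgi-bin/mt/|' \
    -e 's|^StaticWebPath .*|StaticWebPath http://dev.mysite.com/mt-static/|' \
    -e 's|^Database .*|Database mt_dev|' \
    -e 's|^DBUser .*|DBUser mt_dev_user|' \
    -e 's|^DBPassword .*|DBPassword changeme|' \
    -e 's|^DBHost .*|DBHost localhost|' \
    mt-config.cgi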
All of the non-database assets have to be copied from production to development: things like JPEG, PNG, and GIF image files, and other files that have been placed in the production file system either manually or using the Asset Manager. There may be other files that need to be copied from production, depending on how you use Movable Type.
Once all of this is done, and you are able to log in to the Movable Type development server successfully, you will probably want to review the websites and blogs to ensure that all of the content has been copied into development.
I hope these instructions are somewhat helpful to people needing to set up a development environment. I would be happy to get comments or edits if anyone thinks I left out anything significant.
When you say you need a development environment for Movable Type, what exactly do you need to develop?
Are you developing a plugin? A theme? A website? Content?
It is possible to assign a different mt-config.cgi file to each virtual server, each working on a different database, for the same installation.
If you are developing a plugin, you will want to use the PluginSwitch directive so the developed plugin won't be loaded on the real website.
http://www.movabletype.org/documentation/installation/managing-multiple-instances-of.html
Eslar, you may also like to consider this documentation resource:
http://www.movabletype.org/documentation/mt41/rsync.html
Alternatively, you may like to consider:
http://www.cis.upenn.edu/~bcpierce/unison/
If you go with the 'rsync' solution described in the Movable Type documentation, you may also like to check these configuration directives mentioned there:
http://www.movabletype.org/documentation/appendices/config-directives/rsyncoptions.html
http://www.movabletype.org/documentation/appendices/config-directives/synctarget.html

How can we protect ourselves from other third parties installing DLLs with the same names as some of ours into C:\WINDOWS?

Our product includes several DLLs built from open source into files with default names as delivered by the open source developers. We're careful to install the files in our own directories and we carefully manage the search path (only for our processes) to keep the loader happy.
Another developer -- a towering intellect -- decided it would be easier to install their own build of some of the same open source into C:\WINDOWS under the same default DLL filenames. Consequently, when we launch a process which depends on these open source DLLs, the system searches C:\WINDOWS before our directories and finds the DLLs installed by the other developer. And they are, of course, incompatible.
Ideas which have occurred to me so far:
rename all our DLLs to avoid the default names, which would only make it less likely we would encounter collisions
load all our DLLs by full path so the loader captures their names into RAM and doesn't search anywhere else the next time they are requested
For various reasons, neither of these options is palatable at the moment.
What else can we do to defend ourselves against the towering intellects of the world?
You've got only two options: deploy the DLL in the same directory as the EXE (that's where Windows looks first), or use manifests and deploy the DLL to the Windows side-by-side cache. I don't think the latter option is common in the open source world, but it is the only real fix if you want to share DLLs between different apps.
To add to the already excellent answers, you have a couple more choices:
The preferred solution to this problem, supported since Windows XP, is to turn your DLLs into a Win32 assembly. (They don't have to be .NET, but the documentation on creating Win32 assemblies with strong names is appallingly light, so it's easy to get confused and think this is a .NET-only technology.)
An assembly is nothing more complicated than a folder (with the name of the assembly) containing the DLLs and a .manifest file (also with the name of the assembly) that contains an assemblyIdentity element and a number of file nodes, one for each DLL in the assembly.
Assembly-based searching works even when DLLs are statically linked!
The easiest option is to create unversioned assemblies and store them in the same folder as your .exe files (assuming all your EXEs are in a single folder).
If the EXEs are in different folders, then there are two ways to access shared assemblies:
You can store your assemblies in a private alternate location if you expect your application to be used on Windows 7 and higher. Create an app.exe.config file for each of your EXEs, and point a probing privatePath element to a common folder where you are storing the assemblies.
If you are OK with requiring administrative access to perform installs (via MSIs), then you can deal with the appallingly bad documentation (well, absent documentation) on giving your assemblies a strong name, and then store the assembly in WinSxS.
If you can't, or do not want to, bundle your DLLs as assemblies, then it helps to understand the DLL search order.
Using functions like SetDllDirectory is only going to help for DLLs loaded dynamically at runtime (via LoadLibrary).
The DLL search order used to be:
Directory containing the process EXE
Current directory
Various Windows folders
PATH
Which you could have used to your advantage: launch each EXE, setting the "current" directory to the folder containing the OSS DLLs.
With the advent of SafeDllSearchMode, the search order now is:
Directory containing the process EXE
Various Windows folders
Current directory
PATH
Meaning there's now less control than ever :( and the search reaches the "untrusted" C:\Windows and System32 folders even sooner.
Again, if the initial DLL is being loaded via LoadLibrary and it's the dependent DLLs that are the problem, LoadLibraryEx with the LOAD_WITH_ALTERED_SEARCH_PATH flag will cause the following search order (assuming you pass a full path to LoadLibraryEx):
Directory part of the DLL path passed to LoadLibraryEx
Various Windows folders
Current directory
PATH
The directory from which the application loaded is normally the first directory searched when you load a DLL. You can, however, use SetDllDirectory to get the "alternate search order"; in that case, the directory you specify to SetDllDirectory gets searched first.
There is also a SafeDllSearchMode setting that affects this to a degree: turning it on moves the current directory later in the search order.
Maybe just compile them to a static library?
Why not?
Also, the directory the EXE was started from is searched before C:\Windows.
