When I try to generate Javadoc using the menu command Project > Generate Javadoc, the following warnings and error are produced for my custom classes in XPages:
javadoc: warning - No source files for package net.focul.utilties
javadoc: warning - No source files for package net.focul.workflow
javadoc: error - No public or protected classes found to document.
The packages are in the WebContent/WEB-INF/src folder, which is configured in the build path, and they are selectable in the Generate Javadoc wizard. The classes are public with public methods.
Javadocs are generated for all of the XPage and Custom Control classes if I select those.
You're experiencing this behavior because javadoc doesn't understand the Designer VFS (Virtual File System). It assumes that your project consists of a bunch of separate files in some folder structure on your local hard drive, not self-contained inside a single NSF. On the whole, the Designer VFS successfully tricks Eclipse into believing it's interacting with local files by intercepting read/write requests for project resources and importing/exporting DXL or CD records, etc. But apparently they haven't applied this sleight of hand to javadoc as well.
The Java source files corresponding to each XPage and Custom Control are processed successfully because, ironically, they are never stored in the NSF. During every project build, Designer discards any of these it has already generated and re-creates them based on the current contents of the various .xsp files. It then compiles those Java files into .class files, which are stored as design notes inside the NSF. At runtime, it's these .class files that are extracted from the VFS and executed; the source code no longer matters at that point, so there's no reason to ever include the .java files in the NSF, and they're simply kept on the hard drive. One indication of this behavior is that the folder is named "Local" when viewed in Package Explorer / Navigator.
If you're using the built-in (as of 8.5.3) version control integration (see this article for a great explanation of how to use this feature), you can tweak the Build Path to include the copy of the src folder stored in the on-disk project as a "linked source folder". This causes javadoc to consider the duplicate copies valid source files and include them in the generated documentation. On the downside, it also causes Designer to consider them valid source files, which causes compilation errors due to the duplication. So this approach is only viable if you need to generate the documentation infrequently and can therefore break the Build Path temporarily just to run javadoc, then revert to the usual settings.
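For reference, a "linked source folder" is just an entry in standard Eclipse project metadata. A minimal sketch, assuming the on-disk project lives at C:/dev/myapp/odp (the folder name odp-src is a placeholder): the .project file gains a linked resource,

    <linkedResources>
      <link>
        <name>odp-src</name>
        <type>2</type>
        <location>C:/dev/myapp/odp/WebContent/WEB-INF/src</location>
      </link>
    </linkedResources>

and the .classpath file gains a matching source entry:

    <classpathentry kind="src" path="odp-src"/>

In practice you'd do this through the UI (project properties > Java Build Path > Source > Link Source...), which also makes it easy to revert.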
An alternative is to actually maintain your custom Java code this way on an ongoing basis: instead of creating the folder in WEB-INF inside the NSF, just create a folder on your hard drive that stores the source, then include that location as a linked source folder indefinitely. That way Designer can still find the source, but so can javadoc. NOTE: if you go this route, you definitely need to use SCM. Your source code no longer lives inside the NSF, which normally provides a convenient container for getting the source to other developers and ensuring it's included in whatever backup schedule you use; now it lives only on your local hard drive. So make sure you're regularly committing those files to Git / Subversion / Mercurial, etc., or, at the very least, storing them on a file server that is backed up regularly and, if applicable, accessible to all other members of the project team.
When you expand the net.focul.utilties package in Designer, you will see all the methods and properties. But when you click on one of the methods, you will see no source code.
So this is where javadoc fails to generate the documentation. I guess the author of the application has not provided you with the source code. If you have the source somewhere, you can attach it, and then javadoc will be able to generate the documentation.
I ran into the same situation, and I have found the most straightforward method is to export the source to an external folder and then use regular Eclipse to generate the Javadoc. Not sure my process is any less of a hassle than Tim's suggestions, but for me it just feels less risky than trying to deal with the VFS vagaries.
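Once the source is exported, the command-line tool can do the rest; a sketch, assuming the exported tree sits in a folder named src and using the package names from the question:

    javadoc -d doc -sourcepath src -subpackages net.focul

Eclipse's File > Export > Javadoc wizard drives the same tool if you prefer a UI.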
Related
I have to use the wro4j runtime solution. However, the first request to the server for the processed CSS file is very slow.
For production mode, I would like wro4j to generate its files at application startup, to avoid the first slow request.
Here is my scenario, in case someone can advise me on an alternative approach:
I have a Maven project which is built once (say generic.war) but customized for each hosted client (client1.war, client2.war, etc.).
For each client, the appearance of the app can be overridden at different levels.
So I have a generic Maven project and then another routine that unpacks the war (generic.war), customizes it by simply overwriting the desired files, and repacks it for a specific client (i.e. client1.war).
This approach of generating specific wars by overwriting files is already in place and used all the time.
But now I want to use wro4j with this system. The first idea is to do the above, overwriting the .less files from the generic war and relying on the runtime wro4j to do the final processing in the specific wars (client1.war, client2.war, etc.).
But I don't want the first request to hang; I want the groups already in cache for the first request.
I saw this post, but it's a bit old now, and I didn't find how to apply the recommended solution (there's no example, and the part on how to trigger the processing from the ServletContextListener is not clear to me).
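To make the unclear part concrete, this is the kind of warm-up I imagine: a listener that simply requests each group once at startup so wro4j processes and caches them before the first real visitor arrives. A minimal sketch; the port, context path, wro mapping (/wro/*) and group names (all.css, all.js) are placeholders for my actual configuration:

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class WroWarmupListener implements ServletContextListener {

        public void contextInitialized(final ServletContextEvent event) {
            // The HTTP connector may not accept requests until startup
            // completes, so warm up from a background thread.
            new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(5000); // give the container time to open its connector
                    } catch (InterruptedException e) {
                        return;
                    }
                    for (String group : new String[] { "all.css", "all.js" }) {
                        try {
                            URL url = new URL("http://localhost:8080/myapp/wro/" + group);
                            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                            conn.getInputStream().close(); // force processing, discard the body
                        } catch (IOException e) {
                            event.getServletContext().log("wro4j warm-up failed for " + group, e);
                        }
                    }
                }
            }).start();
        }

        public void contextDestroyed(ServletContextEvent event) {
        }
    }

Is this roughly what the recommended solution means, or is there a proper wro4j API for this?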
Thanks in advance :)
In a BizTalk project, why do some XSD files have a hidden xsd.cs and some do not? What are these files used for, and why is it that modifying the XSD and rebuilding does nothing to modify the .cs files?
For example: I have an XSD which is used to map messages to a SQL Send/Receive Port and execute a stored procedure. If I change the stored procedure (say change, delete, or add a parameter) and thus change the XSD to match, these changes aren't reflected when I deploy the orchestration unless I delete the xsd.cs. I CAN see the modified XSD in the Schemas tab of the BizTalk Administration Console. I can see it is modified, yet I will still receive a message routing / mapping error unless that .cs is deleted and the orchestration redeployed. And by the way, after deleting, it never seems to regenerate, though it also does not cause any issues.
Every XSD in your solution should have a .cs file. If you aren't getting them, then there is something wrong with your solution. They are the compiled version of the schema that gets deployed to the GAC. If the .cs file is not being changed after you recompile, then you again have an issue. Check to see if you've accidentally checked the .cs files into source control and that they are now read-only (they should not be checked in).
When you modify the schema, you need to update the version of the schema both in the BizTalk database and in the GAC. If you don't, you will get some strange results. Using the Deploy option from Visual Studio will do this for you automatically, but if you are deploying manually you will have to ensure that it is both imported and GACed.
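For a manual redeploy, the two steps look something like this from a Visual Studio command prompt. The assembly and application names are placeholders, and the exact BTSTask switches are worth verifying against the documentation for your BizTalk version:

    gacutil.exe /i MyCompany.Schemas.dll
    BTSTask.exe AddResource /ApplicationName:"MyApp" /Type:System.BizTalk:BizTalkAssembly /Overwrite /Source:"MyCompany.Schemas.dll"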
.cs files are generated for every BizTalk artifact; that's how they become a .NET type.
All of this should be handled automatically by Visual Studio. If you are having problems that only deleting .cs files will solve, then there's something wrong with your VS setup.
Note, the .cs files should not be in source control. If they are, remove them.
However, the scenario you describe doesn't make sense: what you see in BizTalk Admin comes from the compiled .cs file.
Using TFS 2013, it is a simple matter to generate debug symbols as part of the build process by entering a location into the ‘Path to publish symbols’ field of the build definition. Unfortunately, I can’t use any of the TFS build environment variables to specify the drop location for the symbols in the ‘Path to publish symbols’ field, because symbol publishing takes place after the build is done and those variables are apparently no longer in scope. So I specified a Debug folder in a fixed location and was going to move it to the desired location with the post-build script. Even that does not work, because the symbols are not yet present when the post-build script runs. The order of events is (roughly):
1. Run prebuild script
2. Build
3. Run postbuild script
4. Tests
5. Generate symbols
It looks like this is typically accomplished with yet another server… a Symbols Server. Is that what everyone does?
I notice that the information needed to determine the proper location to save the files (for me, anyway) can easily be found using information in ..\000Admin\server.txt. Using that info, I could have the post-build script wait (say… up to an hour) for the symbols to appear (they should be there within a minute), then move the Debug folder from the fixed location to the proper location. Is there a better way?
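For concreteness, this is the kind of wait-and-move post-build step I'm imagining, as a PowerShell sketch. The share paths, the one-hour timeout, and the idea of scanning server.txt for the build number are all assumptions on my part:

    # Wait for the symbol publish transaction to appear, then move the drop.
    $serverTxt = '\\symbols\share\000Admin\server.txt'
    $deadline  = (Get-Date).AddHours(1)
    while ((Get-Date) -lt $deadline -and
           -not (Select-String -Path $serverTxt -Pattern $env:TF_BUILD_BUILDNUMBER -Quiet)) {
        Start-Sleep -Seconds 60
    }
    Move-Item -Path 'C:\Drops\Debug' -Destination '\\server\drops\MyBuild\Debug'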
Thanks.
The symbol server / symbol share is a separate thing from the drop location. It's structured in a specific way the Debugger understands and allows one to debug an application without having to ship the .pdb files with the application.
Since you may want to provide other parties access to your symbol server (similar to how Microsoft allows access to their symbol servers for most of the .NET Framework), you can simply tell them the location and, optionally, the credentials needed to access it.
The symbol share is not really meant for human consumption, it's all built up with GUIDs and hashes so that the debugger can find its way around easily and quickly. It's also structured so that multiple versions of the same symbol are stored side-by-side.
Especially that last part, storing different assemblies and different versions side by side in the same location, is why you should not try to inject project names or versions into the symbol share location. That's for the debugger to figure out.
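For instance, a given PDB ends up under a path derived from its own signature, something like this (hash shortened, purely illustrative):

    \\tfs\symbols\MyApp.pdb\A3C52F8E40D1482F9C2A...\MyApp.pdb

The store itself already encodes identity and version, so anything you prepended or injected would only break the debugger's lookup.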
Just to be clear, it doesn't have to live on a different server; the only thing required is that you enter a path to a share, and it can even be a sub-folder of that share. So sometimes you see configurations like:
\\tfs\symbols\
\\tfs\builds\
Or
\\tfs\artifacts\symbols
\\tfs\artifacts\drops
But indeed, you could drop your symbols to a completely different server altogether:
\\tfs\builds
\\corporate\symbols
Or you could configure multiple distinct computer names for one system (or use multiple DNS records) and actually have the same server listen to:
\\tfs-symbols\share
\\tfs-builds\share
Or even register the shares at the Active Directory level, allowing you to just use
\\tfs-symbols\
\\tfs-builds\
What you choose is all up to you, but make sure that the symbols and builds end up at two distinct paths.
We have a Libraries folder where we keep third-party DLLs and our own utility DLLs for all applications to reference. I want to do development against one of our utility DLLs and an application that consumes it at the same time. But if I check out the library DLL to change it for temporary local use, TFS insists on checking it out exclusively, which trips other people up. I understand the reasoning behind it doing that (hard/impossible to merge a DLL, so two people shouldn't be working on one at the same time), but I just want to mess with my local copy while I'm working on the library it represents.
I suppose I could delete my application's reference to the DLL and recreate the reference pointing to some other place, but of course this just begs for me to forget and check it in like that, which would obviously be bad. Not to mention that this is a pain in the neck.
How should I proceed in such a situation?
You are using a server workspace, which does not allow editing outside of TFS. In TFS 2012, local workspaces were introduced; these do not set a read-only flag on files, and you are free to edit at will.
You can change your existing workspace in a few clicks: http://msdn.microsoft.com/en-us/library/bb892960.aspx
You could just go into the file system and mark the file as writable. Once you are happy the binary is good, you could check it out, copy the new version of the file over it, and check it back in again. TFS marks binary files like this as locked for good reason, as you can't merge them the way you can with textual content.
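Clearing the read-only flag is a one-liner from a command prompt (the path is a placeholder):

    attrib -r C:\src\Libraries\MyUtilities.dll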
The best approach would be to use a NuGet repository to manage your binary dependencies, instead of relying on binaries checked into source control.
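With NuGet, each consuming application pins the library version it wants in its packages.config rather than sharing one checked-in DLL; a minimal sketch with a placeholder id and version:

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <package id="MyCompany.Utilities" version="1.2.3" targetFramework="net45" />
    </packages>

While developing the library itself, you can also point Visual Studio at a plain folder of .nupkg files as a local package source, so you can test an unpublished build without disturbing anyone else.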
How does Visual Studio process the App_Code folder when a change is made or detected in it? I'm asking specifically about Visual Studio, not IIS or ASP.NET.
I want to gain a better understanding of why Visual Studio freezes for long periods of time whenever I save a code file inside a large App_Code folder of a website project. Alternatively, I could ask: why does Visual Studio not exhibit these same freezes when processing a file inside a class library that is equally large?
Ideally, I would like to see official Microsoft documentation cited for how Visual Studio processes the App_Code folder and what it does differently from processing a class library, for example.
The App_Code folder is not explicitly marked as containing files written in any one programming language. Instead, ASP.NET infers which compiler to invoke for the App_Code folder based on the files it contains. If the App_Code folder contains .vb files, ASP.NET uses the Visual Basic compiler; if it contains .cs files, ASP.NET uses the C# compiler, and so on.

If the App_Code folder contains only files where the programming language is ambiguous, such as a .wsdl file, ASP.NET uses the default compiler for Web applications, as established in the compilation element of the application Web.config file or the machine-level Web.config file. Compilers are named build providers, and a build provider is specified for each file extension in an extension element.
See the documentation here.
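For reference, the compilation element that quote mentions lives in Web.config and looks roughly like this (the .wsdl mapping is the stock machine-level registration; treat the exact type name as illustrative):

    <system.web>
      <compilation defaultLanguage="c#">
        <buildProviders>
          <add extension=".wsdl" type="System.Web.Compilation.WsdlBuildProvider" />
        </buildProviders>
      </compilation>
    </system.web>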
ASP.NET compiles all the code in this folder into a separate assembly, which is then referenced by your project.
Be aware that a double reference can occur if you also include these files as compilable items in your project. In that case, the files are compiled twice at the same time: into the separate assembly (with a temporary name) that gets referenced, and into the assembly in the bin folder. This is the start of the horror show ...
These performance notes about the App_Code folder are slightly dated but likely still apply to the project type:
2) Keep the number of files in your /app_code directory small. If you end up having a lot of class files within this directory, I'd recommend you instead add a separate class library project to your VS solution and move these classes within that instead since class library projects compile faster than compiling classes in the /app_code directory. This isn't usually an issue if you just have a small number of files in /app_code, but if you have lots of directories or dozens of files you will be able to get speed improvements by moving these files into a separate class library project and then reference that project from your web-site instead.

One other thing to be aware of is that whenever you switch from source to design-view within the VS HTML designer, the designer causes the /app_code directory to be compiled before the designer surface loads. The reason for this is so that you can host controls defined within /app_code in the designer. If you don't have an /app_code directory, or only have a few files defined within it, the page designer will be able to load much quicker (since it doesn't need to perform a big compilation first).
-- http://weblogs.asp.net/scottgu/archive/2006/09/22/Tip_2F00_Trick_3A00_-Optimizing-ASP.NET-2.0-Web-Project-Build-Performance-with-VS-2005.aspx