Jenkins helpfully auto-generates a colorful report filled with all of your test results. The issue I'm having, however, is accessing that report.
I have all of my configuration set up the same as on all of my other nodes. (I'm using a master-slave configuration in Jenkins, with 4 slave-node VMs running Windows 7 x64.) When I open up Group 5, however, and open the HTML report page, I get a 404 error.
As always, any help is appreciated.
While I apologize for not being able to share any specifics of the error (configuration, etc.; I'm under an NDA for that type of thing), I was able to solve my problem.
If you have everything set up for nodes, and you are copying files back and forth between the master and slave (there's a plugin that lets you copy files into the workspace before building instead of using an SCM), make sure the "Publish HTML Reports" post-build action does not have its "Keep past HTML reports" option checked. Otherwise you'll end up with file conflicts and the same 404 status code that I got.
We have a project with a single code base which we build on both Windows and Linux, and we want to run Klocwork code analysis on both. Our current approach is:
We have set up one KW project in the web UI
Inject and build on Linux, push the results to the server, save the report
Inject and build on Windows, push the results to the server, save the report
It somewhat works, but the problem is that the latter scan effectively overwrites the results of the first one. If we save the report directly after each push, we still have a saved copy; but if developers want to triage/analyze a hit that is present only in the first build (e.g. in some Linux-specific code), that's almost impossible, because Klocwork has already marked the hit as "obsolete" (it was not present in the Windows scan).
Having two projects is not really an option, because 90% of the code is shared, and it would create a huge overhead for developers to triage the same hits twice.
There are multiple options to achieve your goal.
Option 1: There is a tab under projects called Builds that can give you the build chain report. Here you can see the reports of previous builds.
Option 2: Are you using the Klocwork desktop tools (plugins/kwcheck)? If yes, developers are automatically notified about the new defects/issues they have produced on their machines, so there may be no need for a developer to review the Klocwork portal just to see which issues they have created.
Option 3: I see you have mentioned that 90% of the code is shared. Does that mean your project needs Windows DLLs and Linux libraries together to build?
If the answer is YES, please do let me know; I will think about some possible workarounds.
If the answer is NO, then creating kwtables is a one-time job, and from the second build onwards Klocwork can perform incremental analysis (kwbuildproject ... --incremental). A sketch of that flow follows below.
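Roughly, the per-platform cycle could look like this (the server URL and project name are placeholders, and exact flags vary by Klocwork version, so treat it as a sketch rather than a definitive recipe):
kwinject --output kwinject.out make
kwbuildproject --url http://kwserver:8080/My_Project --tables-directory kwtables --incremental kwinject.out
kwadmin --url http://kwserver:8080 load My_Project kwtables
The first line captures the native build, the second runs the (incremental) analysis into the kwtables directory, and the third pushes the results to the server.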
Option 4: Creating multiple projects is not a bad option. Existing project settings can be replicated, and issue statuses can be synced. When you push results to the Klocwork server, they are pushed from the build machine to the Klocwork web/database server, and a /projects_root/My_Project/builds/My_Build_Name/ directory is created. So maintaining two Klocwork projects won't make much of a difference.
Option 5: Schedule a call with the Klocwork support team. They will be happy to assist you with the best possible approach.
I hope this helps.
This wiki (https://www-10.lotus.com/ldd/ddwiki.nsf/dx/Headless_Designer_Wiki) seems to indicate that you can only create an NSF under your Notes data directory. I have done a couple of quick tests, and the only workaround I can find is to install Domino Designer on the same server as the target Domino server and set the target as the Domino data folder (i.e., C:\Domino\Data\sample.nsf instead of just sample.nsf).
The reason for this is that I am trying to find an automated way to do the following:
Import ODP into workspace
Associate with a new NSF, but choose a Domino Server as a target
Does anyone have another workaround for this?
I wish I had a more complete answer for you, but as this is still unanswered after a few days, I'll try to add some insight. It sounds like you have some experience getting headless DDE builds to work, so I won't focus on that. If you're looking for my take on headless DDE builds, I blogged on the subject a while ago, but I have since adapted the Jenkins CI based process I outlined there into a GitLab CI runner based solution, which I described in another SO answer.
Firstly, I would strongly recommend against setting your Designer target to be the same as a server instance. This might work, but it seems an unnecessary complication, and potentially issue-prone, IMO.
My interpretation of your steps:
automatically receive updates (e.g.- on master branch, or all commits, etc.)
perform build via headless DDE
deploy built NSF
Splitting out the logic for deploying the built NSF is ideal here, since you have an asset that needs to be parked in a server path. The two main approaches I see are:
having a dev/staging server that you can programmatically restart on demand
a more complex mechanism, in an NSF or server plugin, that will ingest the NSF's design and replace the design elements in a (newly created) destination NSF
As you can imagine, that last one is a bit tricky, but it's something I've left off working on until I have more "free time". As for the former, you'll likely want someone with a bit of an admin/operations skill set to assist you, but in my mind there would be a total of three scripts involved (sketched after this list):
one to down the destination server (this is why it should be a dev/staging server)
one to copy the built NSF to the destination file system path
one to start up the destination server
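A minimal sketch of those three steps as Windows batch commands; the service name and paths here are assumptions, not details from your environment, so adjust them to match your staging server:
rem 1. Stop the staging Domino server (service name is a guess; check yours)
net stop "Lotus Domino Server"
rem 2. Copy the freshly built NSF into the server's data directory (paths are placeholders)
copy /Y C:\build\output\myapp.nsf C:\Domino\Data\apps\myapp.nsf
rem 3. Bring the server back up
net start "Lotus Domino Server"
In practice you'd split these into the three separate scripts above, so the copy step can be retried independently of the restarts.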
If you have a design task set to run at a certain interval, pointed at the staging server for any changes, you could conceivably pull from that at whatever interval suits you; nightly, etc. I hope the perspective helps.
I have a local replica. In Domino Designer, when I open any design element, I get this error: "Invalid or nonexistent document".
But this error doesn't occur on the server copy. You might ask why I don't work directly on the server copy: the point is, it's a very large DB with several XPages, custom controls, etc., so building the DB on the remote server copy is painful for me. So I work on the local copy: save, build, and replicate to the server copy.
What I have tried so far:
Deleted the local replica and created a new replica. Still the error.
Replaced with my latest template. Still the error.
Suspected some design elements had become corrupted, so I replaced both the server and local replicas with a blank template to remove all design elements, then again replaced with the latest app template. Still the error.
Ran load fixup on the server copy and replicated to local. Still the error.
Does anybody have a clue about this issue, or any workaround to resolve it?
Thanks in advance for your help.
This may sound strange, but I have solved weird things before by doing this.
Close Notes and Designer. Load Task Manager and make sure all Notes tasks are ended.
Then try the following:
From the Notes program folder, run ncompact -c path\dbname
Delete \notesprogram\data\Cache.ndk
Delete \notesprogram\data\log.nsf
Rename the \notesprogram\data\workspace folder to workspace.sav
Rename desktop8.ndk to desktop8.sav
Rename bookmarks.nsf to bookmarks.sav
Launch Notes and Designer, then test. If it works, great. If not, rename everything back. Note that renaming the above will lose your preferences, etc.
I have a project I am going to begin co-developing on one of my web servers. Due to the nature of this kind of thing, I'd like to have some version control going on. I've been searching all day for something that fits my needs, and Bazaar seems the way to go, but I cannot figure out how to configure it.
My web host is Linux, without SSH (or SFTP, as far as I can tell). I've read that you can use Bazaar in this situation with a "dumb" server, but I can't for the life of me figure out how to configure it, or find a guide. Everything out there requires either SSH/CLI access (neither of which I have) or is too vague to follow. I am using the Windows GUI for Bazaar as well.
Can anyone either point me to a guide/instructions on how to do it, or post one here?
Edit Since Original Post
I have been trying several things since my original post. It might be that I am misunderstanding how Bazaar is meant to work. What I want is to have my PHP files etc. on my web host (to which I do not have SSH access) so that my co-developers and I can edit and test files without overwriting each other.
I initially tried to "start a new project" on my server via "ftp://user:pass#server", and it says that was successful. Then it prompts with an "Unable to open location" error saying "C:/ftp:/user:pass#server is not a branch, checkout, or repository.
Do you want to open it as a virtual repository, searching for nested locations?"
When I hit Yes, it gives me the error "Unable to change to C:/ftp:/user:pass#server - closing page."
If I do the same thing with the "Open an existing location" option, it gives me the same error, except afterwards the Bazaar GUI hangs with "Not Responding" and needs to be killed.
Either way, nothing is created that I can then interact with in Bazaar. If I create a local project and then push, it all seems to work. However, if I try to commit changes so I can push them, I get an error: "Bazaar has encountered an environmental error. Please report a bug if this is not the result of a local problem at https://bugs.launchpad.net/qbzr/+filebug including this traceback, and a description of what you were doing when the error occurred." The details show:
bzr: ERROR: Unable to determine your name.
Please, set your name with the 'whoami' command.
E.g. bzr whoami "Your Name <name@example.com>"
Before you can commit revisions, you need to set a name and email address; these are important metadata in a commit. You can set them in the Settings | Configuration | User Configuration menu: on the General tab, fill in the Name and E-mail fields. It's recommended to use real data in public projects, so that others who view your project can contact you in case they have questions, but it doesn't have to be real. This is a one-time initial setup.
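You can also set and verify this from the command line; run without arguments, bzr whoami prints the currently configured identity (the name and address below are of course placeholders):
bzr whoami "Your Name <you@example.com>"
bzr whoami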
As a next step, I would do a test to make sure you can really use your server over FTP, as a sanity check:
Commit a few revisions in your local repository, just so that you have something to push. It can be anything; it doesn't matter.
Try a push to a URL in the format ftp://user:pass#server/absolute/path/to/somewhere. In your post you wrote ftp://user:pass#server, but it's important to have an absolute path there, like in this example.
If for some reason the push doesn't work well using the GUI, try it on the command line, for example:
bzr push ftp://user:pass#server/absolute/path/to/somewhere
This should really give an error message we can debug. In that case, paste the output into your question.
UPDATE
You said in comments that something was wrong with your name+email setting, and changing that resolved the problem. It would be nice to know what exactly was the problem there.
About bzr push to an FTP server: I double-checked, and this will never create the files on the server. From bzr push -h:
The target branch will not have its working tree populated because this
is both expensive, and is not supported on remote file systems.
Some smart servers or protocols may put the working tree in place in the future.
Over FTP it's a "dumb" server, so it definitely won't put the files there; only the .bzr directory, which holds the repository and branch data. If you want the files there too, I'm afraid you have to copy them manually. There is a related bzr-push-and-update plugin, but it requires SSH access, which is not your case. A sketch of the manual route follows below.
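For the manual copy, one workable pattern (the local path is a placeholder) is to export a clean snapshot of the working tree and upload it with any FTP client:
bzr export C:\temp\site-export
bzr export writes the tree contents without the .bzr metadata, so uploading the contents of C:\temp\site-export to your web root gives you exactly the committed files.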
I have been looking at an issue for a week straight and have been unable to figure it out, and I am desperate for a fix.
On a client site, we have two environments: UAT and PROD. UAT works perfectly (please keep this in mind). We are now trying to deploy the solution to PROD, but certain parts of the solution are not working.
We have developed an ASP.NET application that we provide to clients to let them invoke SSIS packages (there are a couple of drop-downs that they first select from, then they click a button named "Invoke").
When the user clicks the Invoke button, a batch file named InvokeSSIS.bat is called that assembles a command-line call to dtexec with the appropriate parameters.
I'm having a problem with a particular package that is responsible for calling an executable which generates a spreadsheet that I will be importing into my system.
The executable is on a mapped H:\ drive.
I have modified the InvokeSSIS.bat batch file to capture the command it generates. If I execute this command from the command line, it works perfectly. From the web-app invoker, the package executes, but the task responsible for calling the executable doesn't run: the entire package takes only 1 second to complete (whereas it should take about a minute).
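For context, the captured command is shaped roughly like this (the package path, variable name, and values here are hypothetical stand-ins, since I can't share the real command):
dtexec /F "D:\Packages\GenerateSpreadsheet.dtsx" /SET "\Package.Variables[User::OutputDir].Value";"H:\reports"
The point is that the same literal command succeeds in an interactive console but silently short-circuits when launched from the web app, which suggests the execution context rather than the command itself.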
The executable DOES have a GUI, but it is NOT interactive: when you call the GUI with specific parameters, it automatically runs in batch mode and executes a macro used to generate the desired spreadsheet.
I know this is OK because it works on the UAT server AND it works from the command line!
I have checked the permissions on the executable (by right-clicking the executable and clicking Properties). I have granted Full Control on the executable to the same user specified on the Identity tab of the application pool I am using.
Can someone please help me? As I said, I am dying over here!
Please let me know if you have any ideas or what other info you need.
Environment (both UAT and PROD)
OS: Windows Server 2003
IIS 6
asp.net 2.0
SQL Server 2008
Thanks!
Steve
You can't use a mapped drive with IIS.
You must use the \\servername syntax to reach files on other systems.
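For example (server and share names here are invented for illustration), instead of launching
H:\tools\GenerateSpreadsheet.exe
the batch file or package would reference the UNC path directly:
\\fileserver01\tools\GenerateSpreadsheet.exe
Drive letters are mapped per logon session, so a mapping that exists for your interactive account generally doesn't exist for the IIS worker process.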
I agree with user544284 that this is at least in part a mapping issue. I'll ignore for a minute the complete insanity of having a web application call a batch file to start an executable that sits on a remote network drive reached through a drive-letter mapping.
Most likely the UAT box has something set up that maps that drive letter for you, which PROD is missing.
The only other possibility is that a security violation is occurring. Running .exe files from a network drive is generally frowned upon. Do the two environments have the exact same version of Windows? Are they configured the same with regard to UAC? Any differences here are going to be important.
Which brings up an interesting thought: I wonder if someone logged in to the UAT server using the same account credentials the app pool is using and added the IP address of the machine where the exe lives to the list of "Local Intranet" sites... or if they installed SSIS on the UAT server itself.
Just because YOU can log in to the server and run it on the command line means nothing. You have to find out whether the drive letter is mapped at all for the user the web app runs under, whether that user has the required security bits, and whether the local OS will allow it regardless.
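A quick sanity check, if you can open a console in the app-pool account's context (the share path below is a placeholder): run net use to list that session's mappings, and note that a missing H: would have to be created in every session, e.g.:
net use
net use H: \\fileserver01\tools
Mappings don't carry over between users, which is why the UNC path suggested above is the more robust fix.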
Okay, I can't ignore it: harebrained is the nicest adjective I can come up with for this "architecture". Do yourself a favor and go back to the drawing board on this one. It has the word "brittle" written all over it, as you have already found. Instead of building out a batch file to call dtexec, invoke the package directly from your application.