This wiki (https://www-10.lotus.com/ldd/ddwiki.nsf/dx/Headless_Designer_Wiki) seemed to indicate that you can only create an NSF under your Notes Data directory. I have done a couple of quick tests, and the only workaround I can find is to install Domino Designer on the same server as the target Domino server and set the target as the Domino data folder (i.e. C:\Domino\Data\sample.nsf instead of just sample.nsf).
The reason for this is that I am trying to find an automated way to perform the following operations:
Import ODP into workspace
Associate with a new NSF, but choose a Domino Server as a target
Does anyone have another workaround for this?
I wish I had a more complete answer for you, but as this is still unanswered after a few days, I'll try to add some insight. It sounds like you have some experience getting headless DDE builds to work, so I won't focus on that. If you're looking for my take on headless DDE builds, I blogged on the subject a while ago, but have since adapted the Jenkins CI based process I outlined there into a GitLab CI runner based solution, which I described in another SO answer.
Firstly, I would strongly recommend against setting your Designer target to be the same as a server instance. This might work, but it seems an unnecessary complication and potentially issue prone, IMO.
My interpretation of your steps:
automatically receive updates (e.g. on the master branch, all commits, etc.)
perform build via headless DDE
deploy built NSF
Splitting apart the logic for deploying the built NSF is ideal here, since you have an asset that needs to be parked in a server path. The two main approaches I see are either:
having a dev/staging server that you can programmatically restart on demand
a more complex mechanism, in an NSF or server plugin, that will ingest the NSF's design and replace the design elements in a (newly created) destination NSF
As you can imagine, that last one is a bit tricky, but it's something I've put off working on until I have more "free time". As for the former, you'll likely want someone with a bit of an admin/operations skill set to assist you, but in my mind there would be a total of three scripts involved (see the sketch after this list):
one to down the destination server (this is why it should be a dev/staging server)
one to copy the built NSF to the destination file system path
one to start up the destination server
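A minimal PowerShell sketch of those three steps, assuming the staging Domino server runs as a Windows service; the service name and both paths below are assumptions and vary by install:

    # Sketch: stop the staging Domino server, swap in the freshly built NSF, restart.
    # The service name and both paths are assumptions; adjust for your install.
    Stop-Service -Name 'Lotus Domino Server (CDominoData)'
    Copy-Item -Path 'C:\build\output\sample.nsf' -Destination 'C:\Domino\Data\sample.nsf' -Force
    Start-Service -Name 'Lotus Domino Server (CDominoData)'

Keeping these as three separate scripts, as described above, makes each step independently re-runnable if one of them fails.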
If you have a design task set to run at a certain interval and pointed at the staging server for any changes, you could conceivably pull from that at whatever your interval is; nightly, etc. I hope the perspective helps.
We are maintaining code for one of our clients.
Initially, we copied all the source code that they have and added it to our TFS 2012.
We modify the code any time they need a bug fix and give the client deployment packages.
Now, client wants all the latest code in their TFS 2012 as well.
Is there a way to update their source code with our changes? ...
preferably automatically (e.g. via a PowerShell script) and preferably with the history of changes.
There are many approaches, each with some pros and cons. The following are the main options I would suggest.
Database backup and restore
This is the only path that guarantees full fidelity. It has some technical difficulties (e.g. SQL Server versions and editions) and political ones (how much information you care to expose, how much effort you want to put into sanitizing your data).
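If you go this route, the core of it is plain SQL Server backup/restore. A hedged sketch follows; the server, database and file names are hypothetical, and a real TFS collection move also involves detaching/attaching the collection through the TFS administration tools:

    # Back up the collection database on our side (names are hypothetical)
    sqlcmd -S OurSqlServer -Q "BACKUP DATABASE [Tfs_DefaultCollection] TO DISK = 'C:\Backups\Tfs_DefaultCollection.bak'"
    # ...ship the .bak file to the client, then restore on their SQL Server
    sqlcmd -S ClientSqlServer -Q "RESTORE DATABASE [Tfs_DefaultCollection] FROM DISK = 'C:\Backups\Tfs_DefaultCollection.bak'"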
Project synchronization
There are some tools, most notably the TFS Integration Platform, that use the API to read and replay the changes from one system to the other. It requires that the syncing tool can see both systems via HTTP(S).
It gives you the flexibility to project only some data (say source code not work items).
Keep in mind that you will always lose something in the process: the changeset numbers will never match, and some user details will be lost.
Dumb dump
Give up conserving full history and be content to share the code.
This is the simplest to implement: get all the code, ship it, and check it into the other system. You can associate release notes with the check-in.
Two simple scripts using TF.exe are all you need.
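A rough sketch of those two scripts; the collection paths, workspace folders and check-in comment are all hypothetical:

    # Script 1: in a workspace mapped to our TFS, get the latest code
    & tf.exe get '$/OurProject' /recursive
    # Script 2: in a workspace mapped to the client's TFS, copy the files in,
    # pick up any new ones, and check in with release notes in the comment
    Copy-Item -Path 'C:\ws\ours\*' -Destination 'C:\ws\client\' -Recurse -Force
    & tf.exe add 'C:\ws\client\*' /recursive
    & tf.exe checkin "/comment:Code drop - release notes here" /recursive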
You can use the TFS Integration Tools to achieve the code migration (TFS-to-TFS). The TFS Integration Tools move data between two different servers. The migration is done through the APIs of TFS, and there are also some limitations (check the above link for more info).
For detailed steps, please see my answer to this question: Move Team Project to another Project Collection TFS 2013
I'm writing a script that sets up a lot of different applications in Windows (mainly svn and open source servers for http, dns, mail, ftp and db). This script is intended to be executed on new/clean Windows workstations for new developers; it automatically sets everything up to create an environment very similar to the one in production. After it's executed, everything runs locally and the developer can start working right away.
This not only helps new developers, but also all existing developers: whenever there are changes in the whole system, everything is replicated locally.
The one thing I'm still not able to do is make some kind of backup of an IIS server that is running a web app (on the Prod server) and restore it automatically to the new developer's machine, so he doesn't have to install/configure IIS locally.
I've read about using appcmd.exe to create and restore backups, but that works only on the same machine (it uses encryption keys, and those keys change between computers).
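For reference, the appcmd.exe commands in question are roughly these ('ProdSnapshot' is an arbitrary backup name), and the restore only works reliably on the machine that created the backup:

    # Snapshot the IIS configuration on the source machine
    & "$env:windir\System32\inetsrv\appcmd.exe" add backup "ProdSnapshot"
    # Restore it (same machine only, because of the machine-specific keys)
    & "$env:windir\System32\inetsrv\appcmd.exe" restore backup "ProdSnapshot"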
Is there a way, a scriptable way, to take everything IIS related from one server and restore it on another server, without user intervention and having the restored IIS run exactly as the original?
Thanks in advance!
Francisco
Just putting this here so anyone who comes across this will have an understanding as to why this wasn't answered. A website has a massive number of variables associated with it, which prevents any easy method of copying all of its configuration with one or even just a few cmdlets.
To get started, though, you would want to become very familiar with the applicationHost.config file and how you access the properties within it using Get-WebConfigurationProperty. One way to get familiar with how to script against web configuration properties is to use the Configuration Editor in IIS. Whenever you make a change in the Configuration Editor, before committing the changes there is a nifty little link titled Generate Script, which has a PowerShell tab you can use to help you gather the proper Get/Set commands for the configuration elements within the applicationHost.config file.
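As a small, hedged example of that Get/Set pattern ('Default Web Site' is just a placeholder site name):

    Import-Module WebAdministration
    # Read the bindings collection of a site from applicationHost.config
    Get-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
        -Filter "system.applicationHost/sites/site[@name='Default Web Site']" `
        -Name 'bindings'
    # The matching Set cmdlet writes a property back, e.g. disabling directory browsing
    Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
        -Filter 'system.webServer/directoryBrowse' -Name 'enabled' -Value $false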
I've created something almost exactly like what the OP is looking for and it spans 4 modules (over 20,000 lines of code) and has a SQL backend that holds all of the configuration elements.
When a website has everything from underlying DLLs that may need to be registered, ISAPI/CGI restrictions and ISAPI filters, and accounts tied to the AppPool that may need to be added to certain local groups on the server, to secure bindings that require a certificate to be loaded on the server, you can see that this isn't a simple undertaking (and these are just a small portion of the variables that a website may contain).
There is, however, a large set of cmdlets that Microsoft provides out of the box in the WebAdministration module that you can leverage to aid you in developing something like this; a minimal sketch of recreating an app pool and site follows (all names, ports and paths in it are hypothetical).
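    Import-Module WebAdministration
    # Recreate an application pool and bind a site to it (names/paths hypothetical)
    New-WebAppPool -Name 'MyAppPool'
    New-Website -Name 'MySite' -Port 80 -HostHeader 'dev.example.local' `
        -PhysicalPath 'C:\inetpub\MySite' -ApplicationPool 'MyAppPool'

I know this is four years old, but I hope anyone who stumbles on this will find the above useful.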
I'm wondering if someone can enlighten me a little bit on the XPages build process and how it works with other replica copies of a database. Much of the advice I've seen posted regarding working with Domino Designer indicates (logically) that you'll have much faster response working on local copies and then replicating those to the server.
I'll usually save my changes locally, build manually, and replicate to the server, and most of the time that seems to work fine. However, on some occasions I've found that when I view the work I've done in the browser on the server copy, it hasn't updated... in fact, in a couple of scary incidents, it displayed a version from several weeks ago (where is it even getting that from??). This isn't a browser caching issue, and I've opened the design elements (XPages, custom controls) on the server copy and verified that the changes ARE there. I end up having to perform a Clean (not just a build) on the server copy of the application, and then it displays as expected.
This seems like a foolish question, but you shouldn't have to perform a build on each replica copy, correct? Any thoughts as to what might be the issue here? There is another developer involved, and he works directly on the server as he's in the same location, but we are rarely working at the same time, and never on the same elements. We are not using source control at this time.
We have seen similar behavior ourselves.
In our case, we do development on a server, clean / build the project, and then copy that database as a template to a deployment server. From there, we update the design in the production database.
We have noticed that the build process sometimes fails, especially when working over slower links, so we always repeat the clean / build / refresh process a couple of times, and we try to do it while in the office, with a fast connection between the workstations and the server.
We haven't experienced build problems lately, so this repetition of the build process obviously helps.
We have also seen that replicating a design between local and server copies sometimes causes build-related problems, which could explain the problems you are seeing. We have stopped using replication because of that and now always work on the server copy directly.
I don't think that your not using source control software has anything to do with it.
I usually make all changes inside a local template, then perform "Project \ Clean", then update the design in the server database. It works in 99% of cases. If not, I perform "Project \ Clean" once again. I hate this, but it looks like it's the only way to get consistent code in production.
I'm working on a website with some other people. Usually, when we want to modify something, we make the change on our machine, upload the new version with FTP, hope it'll work (or that nobody will notice it doesn't before we correct it), and that's it.
It's already not the best way to work alone, and even less so collaboratively, so I'm asking for advice.
I think that a solution like svn/git/mercurial could help me. I found Bitbucket, which allows free private repositories with Mercurial. But even then, how can I upload the changes I made via FTP and make sure the version I have on my computer is the same as the one on the server?
We are all doing this in our free time (not paid), and people come and leave every year, so I'm looking for something free, easy to use (explaining to everyone why we should use a DVCS is already hard), and which doesn't rely on a specific person.
The server we are using to host the website is a cheap one and doesn't allow the use of ssh, svn,...
Thank you
Version control will not help with the issue you are describing - namely, uploading untested changes to a production site.
What you (and your team) need is better quality control procedures - you need a test website and a tester (QA) person. The process would be:
Make a change
Update the test website
Have the update and the whole website signed off by QA
Update the production/live site
What you will gain by using version control (CVS, SVN, Git or anything else) is recoverability - you will be able to go back to a version before any breaking change. It will still not solve the issue of "the new code broke the site".
You want scheduled releases.
Commit and update code regularly
Code freeze or develop in a branch and merge to the trunk
Test on a staging environment
Find a bug? Go to step 1
Release
You need to understand that what represents your latest correct working build is not what's on the server, but what's in your source repository, whether that be SVN or just the file system. Anything, as long as it isn't the live server! Make sure everything works locally as expected; then, unless the site is huge (I guess not, given your situation), deploy it in its entirety as a single version.
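If the host really only offers plain FTP, even a small deployment script beats hand-picking files in an FTP client. A minimal PowerShell sketch; the host, credentials and paths are placeholders, it assumes the remote folder structure already exists, and note that plain FTP sends credentials unencrypted:

    # Upload the entire local build folder via FTP, one file per call
    $local = 'C:\site\build'
    $client = New-Object System.Net.WebClient
    $client.Credentials = New-Object System.Net.NetworkCredential('user', 'password')
    Get-ChildItem -Path $local -Recurse -File | ForEach-Object {
        # Build the remote path from the file's path relative to the build folder
        $relative = $_.FullName.Substring($local.Length + 1) -replace '\\', '/'
        $client.UploadFile("ftp://ftp.example.com/site/$relative", $_.FullName)
    }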
I feel like I need a better defined framework for updating my SharePoint (MOSS 2007) application with custom code changes. I am creating wsp solution files with features and new types and such, but once those get tested and deployed, I feel like it's a bit of a leap of faith, and that makes me nervous and occasionally reluctant to deploy changes. After deployment, it's difficult to correlate the current state of the SharePoint application with the specific code that is deployed on that SharePoint server. What features are actually installed and on which sites? Which features are activated or deactivated? Which version of this custom field or content type is really there? Things like this. If an error crops up, I have to rely on my assumptions about what code is there and actually running, or I have to spend time digging through deployed assemblies and the 12 hive -- not impossible, but pretty unpleasant.
What steps should I take to improve my ability to unambiguously determine the state of the application and find the code that truly represents that state? Are there third-party tools that can help with this?
I feel your pain... Application Development Lifecycle with SharePoint 2007 leaves me with a bitter taste in my mouth.
To answer your question: we built our own deployment utility that does a few things for us.
Checks the state of key Timer Jobs (too many times we would do a deployment only to find one WFE that did not get the deployment)
Checks the state of key Services on all our web front ends (again, we want to know the health of the farm before we start kicking off timer jobs).
Shows the file version and date of selected assemblies from the GAC (does this across all web front ends). We have seen problems before where assemblies did not get installed correctly across the farm.
Updates web.config settings based on a custom XML schema we provide. We ran into some problems with web.config updates, so we have thought about creating a utility to validate the web.config (specifically, to make sure there are no duplicate entries for specific keys).
Pushes content type updates (the first time content types are deployed via a feature it works great, but as soon as you need to update that content type it gets tough).
Checks status of WSP package after deployment or upgrade.
This utility uses the SharePoint API to do most of this work. Some of it is done by checking WMI Events.
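To give a flavor of it, a couple of those checks can be sketched from PowerShell against the MOSS object model; the assembly path below is hypothetical:

    # List WSP packages and their deployment state across the farm
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SharePoint') | Out-Null
    $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
    foreach ($solution in $farm.Solutions) {
        '{0}  Deployed={1}  State={2}' -f $solution.Name, $solution.Deployed, $solution.DeploymentState
    }
    # Check the file version and date of a deployed assembly, as in the GAC check above
    $path = 'C:\Windows\assembly\GAC_MSIL\My.Feature\1.0.0.0__0123456789abcdef\My.Feature.dll'
    $info = [System.Diagnostics.FileVersionInfo]::GetVersionInfo($path)
    '{0}  {1}' -f $info.FileVersion, (Get-Item $path).LastWriteTime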
Unfortunately the SharePoint development experience is lacking in this regard. As long as you are "namespacing" all features deployed using solution packages, you can use solution management from central admin to keep track of versions, and what gets deployed to which site collection.
Features are scoped at all levels, from the farm to an individual web, so maintenance at that level is a little tough. I just try to organize all deployed code from the (top-down) solution level.
It gets even more complicated when deploying custom timer jobs, event handlers, etc; I really hope that version next will address a lot of these common developer concerns.
Isn't the only way to do this to have a planned/controlled deployment process and a version management system like TFS?
In the current project I am involved in we have:
Continuous builds
Daily Builds on a development server
When we release something to test, we merge the code to the Main branch in the version management system (TFS)
When it's tested and ready for production, we merge the Main branch to the Release branch
Using this structured way of working, we always know what is deployed in which environment and can also track all changes based on the environment or on changes in requirements (these are also tracked in TFS).
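A hedged sketch of those merge steps with tf.exe; the branch paths and check-in comment are hypothetical:

    # Merge the Main branch into the Release branch, then check the merge in
    & tf.exe merge '$/Project/Main' '$/Project/Release' /recursive
    & tf.exe checkin "/comment:Merge Main -> Release for production" /recursive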