Kentico v9 - Can't edit form after migration to staging server

My dev server seems fine; I can access all the sub-levels of the form (Recorded Data, General, etc.). My staging site only has Alternative Forms and Versions.
I thought something may have gone wrong with the migration and manual sync tasks, so I attempted to create a new form in staging, but the result is the same.
I went back to the documentation to see if I missed a step when creating a new form, but nothing is mentioned. Is there a site or setting flag I missed?

There could be a multitude of issues causing this, but the first things I'd check are:
Check your Kentico event log for errors/warnings.
Check the server event log for errors/warnings.
Ensure all your files were copied from the dev to the staging server(s).
When you moved from dev to staging, did you re-sign your macros?
Did you clear the cache on the staging server and in your browser(s)?
I'm guessing it is a caching issue, but the others should also be reviewed, especially re-signing the macros (macro re-signing is available in the Kentico administration UI, under the System application).

Related

Using Headless Domino Designer to create NSF on a Domino Server

This wiki (https://www-10.lotus.com/ldd/ddwiki.nsf/dx/Headless_Designer_Wiki) seemed to indicate that you can only create an NSF under your Notes Data directory. I have done a couple of quick tests, and the only workaround I can find is to install Domino Designer on the same server as the target Domino server and set the target as the Domino data folder (i.e., C:\Domino\Data\sample.nsf instead of just sample.nsf).
The reason for this is that I am trying to find an automated way to perform the following operations:
Import ODP into workspace
Associate with a new NSF, but choose a Domino Server as a target
Does anyone have another workaround for this?
I wish I had a more complete answer for you, but as this is still unanswered after a few days, I'll try to add some insight. It sounds like you have some experience getting headless DDE builds to work, so I won't focus on that. If you're looking for my take on headless DDE builds, I blogged on the subject a while ago, but have since adapted the Jenkins CI-based process I outlined there into a GitLab CI runner-based solution, which I described in another SO answer.
Firstly, I would strongly recommend against setting your Designer target to be the same as a server instance. This might work, but it seems an unnecessary complication, and potentially issue-prone, IMO.
My interpretation of your steps:
automatically receive updates (e.g., on master branch, or all commits, etc.)
perform build via headless DDE
deploy built NSF
Splitting out the logic for deploying the built NSF is ideal here, since you have an asset that needs to be parked in a server path. The two main approaches I see are either:
having a dev/staging server that you can programmatically restart on demand
a more complex mechanism, in an NSF or server plugin, that will ingest the NSF's design and replace the design elements in a (newly created) destination NSF
As you can imagine, that last one is a bit tricky, but it's something I've put off working on until I have more "free time". As for the former, you'll likely want someone with a bit of an admin/operations skill set to assist you, but in my mind there would be a total of three scripts involved (see the sketch after this list):
one to down the destination server (this is why it should be a dev/staging server)
one to copy the built NSF to the destination file system path
one to start up the destination server
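For illustration, here is a minimal C# sketch of those three scripts rolled into one deploy step, assuming the destination is a Windows box running Domino as a service; the service name, paths, and file name below are all hypothetical and will need adjusting for your environment:

```
using System;
using System.IO;
using System.ServiceProcess; // add a reference to System.ServiceProcess

class DeployNsf
{
    static void Main()
    {
        // Hypothetical names/paths; adjust for your environment.
        const string serviceName = "Lotus Domino Server";        // assumed Windows service name
        const string builtNsf    = @"C:\builds\sample.nsf";      // output of the headless DDE build
        const string destination = @"C:\Domino\Data\sample.nsf"; // server's data directory

        using (var domino = new ServiceController(serviceName))
        {
            // 1. Down the destination server (dev/staging only!)
            if (domino.Status != ServiceControllerStatus.Stopped)
            {
                domino.Stop();
                domino.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromMinutes(5));
            }

            // 2. Copy the built NSF into the server's file system path
            File.Copy(builtNsf, destination, overwrite: true);

            // 3. Start the destination server back up
            domino.Start();
            domino.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromMinutes(5));
        }
    }
}
```

In practice you'd wire this up as the deploy stage of the same Jenkins/GitLab CI pipeline that performs the headless build.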
If you have a design task set to run at a certain interval and pointed at the staging server for any changes, you could conceivably pull from that at whatever your interval is; nightly, etc. I hope the perspective helps.

Using Deployment Slots, I mixed up my Connection Strings, how can I get it to work?

I am half expecting the answer to be "delete everything and start over", but I figured I'd come here first after not figuring it out / finding an answer.
I created a Web App and two deployment slots (staging / development). I created two DBs (DBName, DBName_Development). I forgot to tick off "Slot Setting" on the Development slot, and when I swapped it to Staging the configuration setting swapped as well. No problem, I figured; I would just put in the correct configuration setting and then tick off all the "Slot Setting" boxes so this doesn't occur again.
However, even after doing that, it appears my Staging site is still looking for the old DBName_Development database. Since that database was changed, it's not working. I'm not sure if I can even access the web.config for the staging site.
So, do I just trash everything and start over? Or am I missing some setting somewhere in the blades of Azure? I tried restarting the Web App to no avail.
Thank you in advance for any suggestions/guidance/help.
You may start with the Kudu console - it's a nice tool for getting different things done with your web app.
So, if you suspect that some of your websites are using something wrong or that things got mixed up, you can go to http://webappname1.scm.azurewebsites.net (note the .scm.) and http://webappname2.scm.azurewebsites.net and compare the relevant settings. If you see that there is nothing wrong (or, vice versa, that something is wrong), then you can proceed to the debug console, check the state of your web.config, and replace it if you see that it's needed.
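One detail worth knowing while you compare: connection strings configured in the portal (including slot settings) are injected at runtime and override same-named entries in web.config, so the file alone won't tell you what the slot actually uses. A throwaway diagnostic page like this sketch will list what the app really resolves (the page name diag.aspx is hypothetical; remove the page once you're done debugging):

```
<%@ Page Language="C#" %>
<%-- diag.aspx: hypothetical throwaway page; delete after debugging. --%>
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        // Lists every connection string the app resolves at runtime, including
        // portal/slot settings that override web.config entries of the same name.
        foreach (System.Configuration.ConnectionStringSettings cs
                 in System.Configuration.ConfigurationManager.ConnectionStrings)
        {
            Response.Write(Server.HtmlEncode(cs.Name + " = " + cs.ConnectionString) + "<br/>");
        }
    }
</script>
```

If the staging slot still prints the DBName_Development connection string, the old value is still winning somewhere (slot setting, portal configuration, or web.config).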

How to move a document from a pre-production to production instance in Kentico 7

We recently migrated a bunch of document updates from a pre-production server up to our production server. We'd attempted to use content staging, which had worked mostly OK in the past, but this time it failed with a lot of "parent record not found" errors. Our outsourced developer used the Documents tab of the Staging module to sync subtrees across. However, a few files got missed, or didn't work correctly the first time. So I'm trying to move them now, and I'm running into a problem.
After expanding the content tree and clicking on the document in the Documents tab, and selecting the correct target server (we've got bi-directional staging set up), we're getting an error: "Dependent task 'Move document See Things Differently' failed: Synchronization server error: Exception occurred: SyncHelper.ServerError: Document node not found, please synchronize the document node first."
Looking at the tasks listed, I don't even see a Move document task anywhere queued up for the target server.
Is there any way I can move this document up to our production instance? I've looked at the site export as an alternative, but it doesn't look like I can export just this one page. Am I going to have to recreate the page on Production instead?
The best way to attempt this sync is to clear out all the staging tasks and do a full sync from the root of the website. Most likely, what happened to some of the documents reporting "moved..." is that the pages were reordered, which means every document below that document's parent gets updated at that level. So simply moving or reordering one document out of 10 will trigger 10 staging tasks. If you don't sync those to the production site, the order will be off relative to the staging site.
I have had problems similar to this before.
This typically works:
Create a copy of the document and put it in the same location in the content tree.
Delete the original document.
Make any changes to the new document's name, URL, aliases, etc. (remove the '1', for example).
Then push that new document with Kentico staging.
It's a bit of a hack, but sometimes necessary.
Brenden is right on target about clearing the staging tasks listed under "All tasks" before you try syncing again. We've run into these errors on our sites when we've tried pushing a large number of docs from staging to production. What worked for us was deleting all pending and failed "Pages" tasks, then under the content tree in "Pages," navigating to the first child level and syncing "Current page" all the way to the closest parent directory, and then syncing "Current subtree."
For instance, if the problem doc is in, say, the "18" dir, select Articles and sync current page, then 2016, then 01, and for 18 sync current subtree.
(screenshot: content tree syncing)
The best way is to use Kentico's built-in Staging module, and use it to first move objects and then the pages.
I have never faced any problem moving a large number of nodes (around 8,000). That's the best possible approach.
In case your website has a large number of custom table items, let's say 50K, then I would do an export/import of the table. Syncing that many entries has usually given a connection time-out error for me before.
Thanks,
Chetan

SharePoint 2010 - modify web.config file with HTTP handlers

I am deploying a SharePoint 2010 web part that uses the Microsoft .NET charting tool to build charts. I need the chart handler added to the SharePoint web.config files automatically. I've been told that when you create the WSP, the package can be configured so that, when the program is installed, it modifies the web.config to add these handlers.
I have seen a couple of options out there:
-WebConfigModifications
-Safe controls
I don't know which, if any, I should be using. I don't know for sure if this will be a first-time installation for the application (we're moving SharePoint environments at the same time we are updating this; I think it will be a first-time installation in that new environment, but I can't be sure).
And I definitely do not know how to implement this correctly. I would appreciate any advice.
Also it may be important to know that I do not have any privileges on the server. I can't even deploy myself.
For example, this seems like good info: http://platinumdogs.me/2009/07/08/using-the-mschart-controls-in-sharepoint-moss-2007/ - except that I can't just write to the web.config and restart IIS. It has to be automated, not a direct edit to the file.
Thanks all!
I would recommend that you use a Feature Receiver attached to your WSP to create the appropriate SPWebConfigModification entries when your solution's features are activated. Likewise, the SPWebConfigModification entries should be removed when your solution's features are deactivated.
Step 1: Create a Feature Receiver
MSDN has an overview of how to add a Feature Receiver: http://msdn.microsoft.com/en-us/library/ee231604.aspx
Note you'll want to handle both the FeatureActivated and FeatureDeactivating events.
Step 2: Use Feature Receiver events to add or remove SPWebConfigModifications
In those two events, you'll need to programmatically add or remove one or more SPWebConfigModification entries. These affect SharePoint's web.config file, but unlike a manual edit of the config file, they are stored in SharePoint's content database. This means that if the web.config is reset for any reason (and it happens), SharePoint can and will reapply the modifications, thus preserving your changes.
MSDN has an overview of programmatically creating and removing SPWebConfigModifications: http://msdn.microsoft.com/en-us/library/office/bb861909(v=office.14).aspx
It is very important that the FeatureDeactivating event properly clean up all modifications made during FeatureActivated, or you will end up with a proliferation of duplicate config entries. This means you need to really understand how to use the Path and Name properties of the SPWebConfigModification.
This article gives a good overview of how Path and Name are combined to create an XPath expression pointing to the node to be added or removed: http://smindreau.wordpress.com/2013/06/12/finally-the-way-to-add-web-config-modifications-to-sharepoint/
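Putting steps 1 and 2 together, a minimal sketch of such a feature receiver (scoped to the web application) might look like the following; the owner string is hypothetical, and the exact ChartImageHandler attributes are taken from the MSChart documentation for .NET 3.5, so verify them against the chart assembly you actually deploy:

```
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

public class ChartHandlerFeatureReceiver : SPFeatureReceiver
{
    // Hypothetical owner token; used later to find and remove our entries.
    private const string ModificationOwner = "MyCompany.ChartHandler";

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // Assumes a WebApplication-scoped feature.
        var webApp = (SPWebApplication)properties.Feature.Parent;

        var mod = new SPWebConfigModification
        {
            // Path is the parent node; Name is an XPath-like predicate that
            // uniquely identifies the child node being ensured.
            Path = "configuration/system.webServer/handlers",
            Name = "add[@name='ChartImageHandler']",
            Owner = ModificationOwner,
            Sequence = 0,
            Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode,
            // Handler registration per the MSChart docs; check the version and
            // public key token against the assembly you deploy.
            Value = "<add name=\"ChartImageHandler\" preCondition=\"integratedMode\" " +
                    "verb=\"GET,HEAD,POST\" path=\"ChartImg.axd\" " +
                    "type=\"System.Web.UI.DataVisualization.Charting.ChartHttpHandler, " +
                    "System.Web.DataVisualization, Version=3.5.0.0, Culture=neutral, " +
                    "PublicKeyToken=31bf3856ad364e35\" />"
        };

        webApp.WebConfigModifications.Add(mod);
        webApp.Update();
        // Applies the stored modifications to every web.config via a timer job.
        webApp.WebService.ApplyWebConfigModifications();
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
    {
        var webApp = (SPWebApplication)properties.Feature.Parent;

        // Remove only the modifications we own, then reapply.
        for (int i = webApp.WebConfigModifications.Count - 1; i >= 0; i--)
        {
            if (webApp.WebConfigModifications[i].Owner == ModificationOwner)
            {
                webApp.WebConfigModifications.RemoveAt(i);
            }
        }
        webApp.Update();
        webApp.WebService.ApplyWebConfigModifications();
    }
}
```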
Step 3: Test, test, TEST!
Lastly, test activating and deactivating your solution's feature in your local development environment to make sure everything is working properly. Note that the modifications will be applied via a timer job, so you may need to wait a minute or two to see the changes show up. Be sure the feature deactivation cleans up your modifications! (If you get into a mess in your development environment with duplicate modifications, you can always wipe the slate clean with a little PowerShell action.)

How to do remote staging in Liferay 6.1.1 GA2?

I have a site where local staging worked fine when I tried to apply it, but when I tried to connect it through a remote server it's not working, giving an error that the connection can't be established. Has anyone tried this?
This is the configuration with the error message: (screenshot not included)
This blog post (disclaimer: my own) explains how to do it with https - you can omit long parts of it if you don't want encryption. It also covers 6.0, but the general principle is still the same.
You want to pay special attention to the paragraph "Allow access to webservices" in that article and check whether your publishing server (the "stage") has access to the live server. In general, if this is not on localhost, it requires configuration as mentioned in that article.
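For reference, that access is controlled by host whitelists in the live server's portal-ext.properties; a minimal sketch, assuming your staging server's IP is 192.168.1.50 (a placeholder - substitute your own), would be:

```
# portal-ext.properties on the live (production) server
# Allow the staging server to reach the tunnel and Axis web services.
# 192.168.1.50 is a placeholder for the staging server's IP address.
tunnel.servlet.hosts.allowed=127.0.0.1,192.168.1.50,SERVER_IP
axis.servlet.hosts.allowed=127.0.0.1,192.168.1.50,SERVER_IP
```

Restart the live server after changing these properties.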
As you indicate that you can't connect to your production server from your staging server, please check by opening a browser running on the staging server and connecting it to the production server - go to http://production-server-name:8080/api/axis and validate that you can connect (note: you get the authoritative result for this test only when you are not accessing localhost as the production system, so do run the browser on the staging system!). With this test you can rule out the first possibility, your remote system being disallowed. Once this succeeds, you'll need credentials for the production server to be entered on the staging server - the account that you use needs to have permissions to change all the data it needs to change when publishing content (and pages etc.).
The error message you give in the added screenshot can appear when the current user on staging does not have access to the production system (with the credentials used) - verify that you have the same user account that you are using on your staging system (the one that gets the error message from the screenshot) in your production system. Synchronize the passwords of the two.
In your comment you give the information that you're using different versions for the staging and the production environments - I don't expect that to work, so this might be the root cause. Test with both systems at the same version.
A couple important points to keep in mind with remote publishing:
If you're not on LDAP (or you have different LDAPs for different environments), you should validate that your user account is exactly the same in both source and target environments. So, if you're on the QA site and you want to remote publish to production, your screen name, email address, and password should all be the same.
Email address is uber important. Depending on which distribution (version) of Liferay you are on, the remote publish code uses your email address irrespective of whether or not you have portal-ext.properties configured to use screen name.
You should have the Administrator role on both sides. It may not be required in every scenario, but giving that role to users who do remote publishing has saved me time and effort debugging why someone's remote publish didn't work. Debugging this process takes a very long time.
If remote publishing is causing you problems (and it probably is or you wouldn't be here), try doing LAR file exports / imports. This is important since remote publish failures are not exactly helpful in telling you what failed; they just tell you that it failed. Surprisingly, there are often problems in the export process, and you can sometimes pinpoint some bad documents or a funky development thing you did using Global scope and portlet preferences that caused your remote publish to fail. I generally use this order in this situation:
a) documents and media [exclude thumbnails or your LAR file will likely double in size; also exclude ranks if you're not using them] from the wrench icon in the control panel
b) web content from the wrench icon in the control panel
c) public pages [include data > web content display, but remove all the other data check boxes], include permissions, include categories
d) private pages [same options as public pages]
If you already have the Administrator role and it's saying you don't have permissions to remote publish to the remote site, set up your user on the target environment with the "Site Administrator" or "Site Owner" role.
A little late for "first and foremost", but anytime you have something that's not working (remote publishing or otherwise), check the logs before you do anything else. The Liferay code base doesn't include a lot of helpful logging, but you do occasionally get a nugget of information that helps you piece together enough to do root cause analysis.
Cheers! HTH
