"Invalid or nonexistent document" error occurs when opening design element in domino designer - xpages

I have a local replica. In Domino Designer, when I open any design element, I get the error "Invalid or nonexistent document".
This error does not occur on the server copy. You might ask why I don't work directly on the server copy: it is a very large database with several XPages, custom controls, and so on, so building the database on the remote server copy is painful for me. Instead, I work on the local copy, save, build, and replicate to the server copy.
What I have tried so far:
Deleted the local replica and created a new replica. Still the error.
Replaced the design with my latest template. Still the error.
Suspected some design elements had become corrupted, so I replaced both the server and local replicas with a blank template to remove all design elements, then replaced again with the latest app template. Still the error.
Ran load fixup on the server copy and replicated to local. Still the error.
Does anybody have a clue about this issue, or a workaround to resolve it?
Thanks in advance for your help.

This may sound strange, but I have solved weird problems by doing this before.
Close Notes and Designer. Open Task Manager and make sure all Notes tasks have ended.
Try the following:
From the Notes program folder, run ncompact -c path\dbname
Delete \notesprogram\data\Cache.ndk
Delete \notesprogram\data\log.nsf
Rename the \notesprogram\data\workspace folder to workspace.sav
Rename \notesprogram\data\desktop8.ndk to desktop8.sav
Rename \notesprogram\data\bookmarks.nsf to bookmarks.sav
Launch Notes and Designer, then test. If it works, great. If not, rename everything back. Note: renaming the above will lose your preferences, etc.
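If you prefer to script those steps, here is a minimal sketch as Windows commands, assuming a default single-user install under C:\notesprogram. The paths and path\dbname are placeholders for your own install (on a multi-user install the data directory lives under your user profile instead), and Notes and Designer must be fully closed first:
rem Sketch only: adjust the folders and the database name for your own install
cd /d C:\notesprogram
ncompact -c path\dbname
del data\Cache.ndk
del data\log.nsf
ren data\workspace workspace.sav
ren data\desktop8.ndk desktop8.sav
ren data\bookmarks.nsf bookmarks.sav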

Related

Excel TFS Add-In Crashes Connecting to TFS

We frequently use Excel to perform bulk updates of data in TFS. Up until very recently, the Team Foundation Add-In has worked very well. However, it has started failing in several ways:
It will connect to the server, but attempting to connect to any project causes Excel to crash, producing a Watson report in the Windows Application Event Log.
If I restart Excel, it reports that it is running into problems with both the shim and the add-in, and offers to disable it. If I do not disable it, I still can't connect to a project.
Eventually, the add-in refuses to load at all, until I use the Options dialog to manually add the COM add-in back into the application. Doing so produces the same results (Excel crashes when attempting to load a project).
I have taken the following steps in an attempt to resolve the issue:
Removed and completely reinstalled Office.
Re-registered the add-in component.
Uninstalled and reinstalled Team Foundation Office Integration.
None of these have produced a fix to the issue.
Does anyone know how to resolve this issue?
P.S. If this is not the correct "stack" for this question, kindly point me to the correct one on the exchange. Thank you.
If you are reading the accepted answer and it still isn't working, here's an additional tip. I had the EXACT same problem and saw that same link to clear the cache on numerous sites, but it didn't work.
Here's the thing. I don't think that article lists ALL of the places that cache can be hiding on your machine. I deleted the cache folder in two different places on my machine and had given up on that as a solution.
Then I searched my entire hard drive for any folder with "Team Foundation" in the name and found a couple more buried in other hierarchies. Deleting these FINALLY solved the problem.
Here are some folders to look for, but like I said, check the entire drive:
c:\users\yourlogin\AppData\Local\Microsoft\Team Foundation
c:\Program Files\Common Files\Microsoft shared\Team Foundation Server\
c:\users\yourlogin\AppData\Local\Temp\Microsoft\Team Foundation
The actual Cache folder will be nested another level deep under a numbered folder named something like "7.0" or "8.0". Delete the Cache folder from every numbered folder you find.
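For reference, the cleanup looks roughly like this from a command prompt. This is only a sketch: "7.0" is an example version number, the search line only covers your profile (the full-drive search described above is still worth doing), and Visual Studio and Excel should be closed first.
rem List Team Foundation folders under your profile
dir /s /b /ad "%USERPROFILE%\AppData\Local" | findstr /i /c:"Team Foundation"
rem Remove the Cache folders you find; repeat for every numbered version folder
rd /s /q "%LOCALAPPDATA%\Microsoft\Team Foundation\7.0\Cache"
rd /s /q "%TEMP%\Microsoft\Team Foundation\7.0\Cache"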
In general, cleaning the caches on your client machine will resolve such problems, including the TFS and VS caches.
To clean the caches, please see How to clear the TFS cache on client machines

SSIS + Excel VS_NEEDSNEWMETADATA error

I'm working on a project to load data from SQL Server into an Excel file.
When working on my local machine, the package works perfectly.
However, when I deploy it, I get the error: failed validation and returned validation status "VS_NEEDSNEWMETADATA".
I'm using SSIS 2012 and Excel 2016
Any help will be appreciated
Thank you
This issue is always very complex to figure out.
Still, when you have an issue like this, check whether your sources or destinations have changed while your deployed package has not. Whenever you update your connections, tables, database, or files, be sure to deploy again.
Also, be careful when you work with Script Components: do not copy/paste this component from other packages, otherwise you will get the metadata error and will not be able to resolve it.
In my case, the issue was related to the destination Excel file. I had changed the file (erasing some lines) without changing its structure, but I forgot to copy it to the server.
That means even if the changes are minor and do not affect the structure of the destination file, you have to update the file on the server with the one you use in dev.

How to move a document from a pre-production to production instance in Kentico 7

We recently migrated a bunch of document updates up from a pre-production server to our production server. We'd attempted to use content staging, which had worked mostly OK in the past, but this time it failed with a lot of parent records not found errors. Our outsourced developer used the Documents tab of the Staging module to sync subtrees across. However a few files got missed, or didn't work correctly the first time. So I'm trying to move them now, and I'm running into a problem.
After expanding the content tree and clicking on the document in the Documents tab, and selecting the correct target server (we've got bi-directional staging set up), we're getting an error: Dependent task 'Move document See Things Differently' failed:Synchronization server error: Exception occurred: SyncHelper.ServerError: Document node not found, please synchronize the document node first.
Looking at the tasks listed, I don't even see a Move document task anywhere queued up for the target server.
Is there any way I can move this document up to our production instance? I've looked at the site export as an alternative, but it doesn't look like I can export just this one page. Am I going to have to recreate the page on Production instead?
The best way to attempt this sync is to clear out all the staging tasks and do a full sync from the root of the website. Most likely what happened to some of the documents stating "moved..." is that the pages were reordered, which means every document under that document's parent will be updated at that level. So simply moving or reordering one document out of 10 will trigger 10 staging tasks. If you don't sync those to the production site, the order will be off compared to the staging site.
I have had problems similar to this before.
This typically works:
Create a copy of the document and put it in the same location in the content tree.
Delete the original document.
Make any changes to the new document's name, URL, aliases, etc. (remove the '1', for example).
Then push that new document with Kentico Staging.
It's a bit of a hack, but sometimes necessary.
Brenden is right on target about clearing the staging tasks listed under "All tasks" before you try syncing again. We've run into these errors on our sites when we've tried pushing a large number of docs from staging to production. What worked for us was deleting all pending and failed "Pages" tasks, then under the content tree in "Pages," navigating to the first child level and syncing "Current page" all the way to the closest parent directory, and then syncing "Current subtree."
For instance, if the problem doc is in, say, the "18" dir, select Articles and sync current page, then 2016, then 01, and for 18 sync current subtree.
[content tree syncing screenshot]
The best way is to use Kentico's built-in Staging module and use it to move objects first and then the pages.
I have never faced any problem moving a large number of nodes (around 8,000). That's the best possible approach.
If your website has a large number of custom table items, let's say 50K, then I would do an export/import of the table. Syncing that many entries has usually given a connection timeout error before.
Thanks,
Chetan

SQL Schema Compare will not update after CLR object installed: 'Source schema drift detected'

After installing a custom CLR object, SQL Server Data Tools (SSDT) in VS2012 will not allow an update. The error is "Source schema drift detected. Press Compare to refresh." After a refresh, the same thing happens.
Tried:
In the settings, I restricted the compared objects to just Stored Procedures.
Settings -> General -> Block on possible data loss: tried both on and off.
This sort of loop can also be caused by a referenced SSDT project failing to build. The referenced project may be missing, unloaded, or have an error which prevents the compare from completing.
This is not an answer, but a clue for dealing with this problem.
I was trying to update a column from varchar(200) to varchar(MAX) and got this problem as well. So I logged in to the server and tried to update the database manually via SQL Server Management Studio, which was installed there, and got this error:
"Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created."
It seems that re-creating a table is something so dangerous that "block/unblock on possible data loss" cannot handle it. So I think only if we can work around this LOCAL warning can we update the database REMOTELY.
But why does going from (200) to (MAX) lead to re-creating the table? It does not make any sense. I tried (200) to (1000), and that did not work either. This might be the key to this problem.
And if you do the same update in Server Explorer in VS, instead of SQL Server Management Studio, it works. Again, why?
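For what it's worth, this looks like an artifact of the SSMS table designer rather than of the engine: the designer scripts a drop-and-recreate for many column changes, while a plain ALTER TABLE ... ALTER COLUMN from varchar(200) to varchar(MAX) does not drop the table. A hedged sketch from a command prompt; server, database, table, and column names are placeholders, and you should keep the column's existing NULL/NOT NULL setting:
rem Placeholder names; -E uses Windows authentication
sqlcmd -S YourServer -d YourDatabase -E -Q "ALTER TABLE dbo.YourTable ALTER COLUMN YourColumn varchar(MAX) NULL"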
This can happen when a db user "changes".
The following rather scary forum page recounts issues where foreign hackers were trying to brute-force access to the "sa" db user, with each attempt changing the sa user's modification timestamp (which is seen as schema drift):
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/5c22a7b4-7a82-4717-a118-2475bc62705b/schema-compareupdate-error-target-schema-drift-detected?forum=ssdt
It is also mentioned there that you can query the sa user a few times to see if this is happening to you:
SELECT * FROM sys.server_principals WHERE principal_id=1
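Running that query a few times and comparing modify_date shows whether the sa principal is actually being touched. A small sketch from a command prompt (YourServer is a placeholder; -E uses Windows authentication):
sqlcmd -S YourServer -E -Q "SELECT name, create_date, modify_date FROM sys.server_principals WHERE principal_id = 1"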
I am currently experiencing the same issue (that the sa-user is being modified; I know nothing about hackers yet) and am yet to find a solution.
Edit - I turned on logging in Windows Firewall via Properties > Logging, and we set up a blocking rule on port 3071, which had a lot of unexplained traffic. Then the problem went away.
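In case it helps, the equivalent of that change from an elevated command prompt looks roughly like this. It is only a sketch: it assumes an inbound TCP rule, and 3071 was simply the port showing the unexplained traffic here.
rem Log dropped connections, then block inbound traffic on the suspicious port
netsh advfirewall set allprofiles logging droppedconnections enable
netsh advfirewall firewall add rule name="Block port 3071" dir=in action=block protocol=TCP localport=3071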
I tried running VS as an administrator, and it worked.

UMLModeler.closeModel(modelObj) is not closing the EMX file in the editor in RSA 6.0.1

I am working on a project where we are using RSA 6.0.1.
I have to run a set of tasks programmatically. I open the .emx file using UMLModeler.openModel(absoluteModelPath); then do some editing and save through UMLModeler.getEditingDomain().run(new ResourceSetModifyOperation("Update Operation") {}, Monitor); and then refresh the project through sourceProject.refreshLocal(IProject.DEPTH_INFINITE, monitor); Up to this point everything works fine, but finally, when I close the model through UMLModeler.closeModel(objUMLModel); the code runs, yet the EMX file is not closed in the editor.
There is no error and no exception. Can anyone please suggest what I can do to close this .emx file?
First, I would upgrade to 7.5.4, as the model concept goes away; in fact, the method you are using is deprecated.
From the API documentation:
closeModel(Model model)
Deprecated. Since 7.5, use the closeModelResource(Element) method, instead
Using the newer methods might resolve your problems. Additionally, did you try refreshing the workspace? Either manually, by right-clicking on the project and selecting Refresh, or doing it in code.
Finally, the most likely issue is that there are multiple 'handles' to the model. Closing yours does not close the editor or Project Explorer handle. I do not work for IBM, so I cannot know this for sure. You could test this by opening the model in the Project Explorer, opening it with your code, then closing it in the explorer manually, and only then running your transaction on the model and closing it. What does the explorer look like when it closes?
Or post more details and maybe I can code up my own example. I would try the debugging first and also post this on the IBM developerWorks site. They are likely going to tell you to upgrade, though. :)
