We are looking at upgrading from Kofax Capture 8 to 10.
Currently, our Kofax Capture 8 installation runs on a single server with 3 connected client scanner workstations.
When going to Capture 10, can we use the same single-server setup as with the current installation, or should we install a new Kofax server in a new VM and then copy the batch classes and batches over to the new server?
What are the things we need to consider while upgrading?
Any comments that would help us (do's and don'ts) would be appreciated.
Thank you.
Since you're in a virtual environment with a fairly small number of stations, I suggest you create a snapshot of the server (VMware and Hyper-V both have this feature) and do an in-place upgrade with the Kofax installer: start installing KC 10 on the server, and the installer will notice that an old version is present and prompt you to upgrade.
If the upgrade goes wrong then you can revert to the snapshot you created and start over as if nothing happened.
Once the server is OK, upgrade the stations by running the installer from the WrkInst folder on the server. Try on a single station first, and if that works, roll out to the rest.
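For the snapshot step, if the server runs on Hyper-V, something along these lines would do it; the VM name and share path below are placeholders, not details from your setup:

Checkpoint-VM -Name "KofaxCaptureServer" -SnapshotName "Pre-KC10-upgrade"
# once the server upgrade is verified, run the workstation installer from the server's shared WrkInst folder on each station
& "\\KOFAXSRV\CaptureSV\WrkInst\setup.exe"

VMware offers the same rollback safety net through its snapshot feature if that is your hypervisor instead.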
When going to Capture 10, can we use the same single-server setup as with the current installation? --- Yes, you can use the same single server.
Things you need to take care of:
Your configuration path for importing batches might change.
Check the DB connection.
Dictionary and database paths might need to be set again.
Check your export connector mappings; they should not be affected, but if they are, you will need to map them again.
Take a snapshot while you're installing and keep a backup of the project .fpr file and the batch class so that you can roll back to the older version.
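For the backup itself, a rough sketch; every path here is a placeholder for wherever your project .fpr file and exported batch class actually live:

xcopy "D:\KofaxProjects\MyProject\*.fpr" "E:\Backup\PreUpgrade\" /Y
xcopy "D:\Exports\MyBatchClassExport.cab" "E:\Backup\PreUpgrade\" /Y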
After upgrading the system to Windows 10 (version 1803), we are getting the below issues while working with ClearCase 8.0.1.x/9.0.1.x:
Unable to checkin/checkout.
Not able to create views.
Not able to add any file to source control.
The system hangs & crashes while performing any ClearCase operation.
There is no error message, but I have attached a screenshot for reference.
Please let us know if there is any known issue with Windows 10 version 1803, or whether some security feature it enables is involved.
Or has ClearCase provided any fix?
We have tried 9.0.1.5 and issue still persists.
This is what we got from the Windows event log:
The computer has rebooted from a bugcheck.
The bugcheck was:
0x000000c2 (0x0000000000000004, 0x00000000535be990, 0x000000000004efd3, 0xfffff803e01848b1)
This happens for most of the people who have upgraded to Windows version 1803 :( For people who are still on version 1709 it is working perfectly fine.
Then I would recommend contacting IBM support: only they can update their ClearCase 9/Windows 10 compatibility matrix and confirm whether MVFS is supported on a more recent (1803) Windows 10 edition.
We are also facing the same problem and I have raised a case with IBM. It is still not resolved. According to IBM, there are some limitations when running ClearCase with Windows 10 and Windows 2016.
We tried all the options except disabling Secure Boot. If possible, please disable the Secure Boot option in Windows 10 and try to check in/check out code from ClearCase.
Note: It works for snapshot views, which means the issue is related to MVFS.
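Before changing anything, you can verify the current Secure Boot state from an elevated PowerShell prompt on a UEFI machine; this is just a quick check, not a fix:

Confirm-SecureBootUEFI
# returns True when Secure Boot is enabled and False when it is disabled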
I'm seconding VonC's recommendation to open a ticket with IBM. When you do that, save a step and collect a clearbug2 and a kernel memory dump to send in as soon as the case is opened; it will save the turnaround time of us asking you for them. If the installed-programs list doesn't include installed security software (DLP, privilege-management software like Avecto, other endpoint security tools), please list those separately as well.
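If the machine is only set to the default automatic/small dump, one way to make sure a kernel memory dump is written on the next bugcheck is to set CrashControl accordingly and reboot; value 2 selects "Kernel memory dump". Treat this as a general Windows setting, not an IBM-prescribed step:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled /t REG_DWORD /d 2 /f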
I would also love to know who at IBM told you there are "limitations" with Win10-1803.
There are a few issues with Windows 10 "version upgrades" breaking things, but they generally don't cause system crashes. Windows 10 upgrades are actually full OS installs that then (imperfectly) migrate application settings. Anything that uses custom network providers (ClearCase is one example) will find that the network providers will be broken or partially broken. Reinstalling is usually required. Again, that has not yet been reported as a cause of a BSOD.
If the upgrade/reinstall didn't fix view creation, please post a separate question on the view creation issue. There may be things we can do to the SMB 2 caches to allow view creation to work in cases where the view storage is not on the client host.
I noticed that the screenshot you posted is a Terminal Services disconnect screenshot. Does the issue only occur over a Terminal Services client connection, or does it also happen on a local connection?
I would like to install Lotus Notes 7 (Designer/Client/Admin) on the same computer where we have already installed Lotus Notes 9, probably on another drive. Would this create any conflicts or other issues? Can someone explain how to do that, or offer any guidance?
You can avoid the pitfalls by installing a virtual system on your machine (VMware or VirtualBox) and running it from there. When you use a shared folder from the host system, all your data will be directly accessible on the host. You can have several virtual machines, each for one version of Notes, or for one client needing a special setup or different user IDs.
As Notes 7 (and all versions before it) does not even need an installation to work, the answer is easily given: just install the client on another machine (even a virtual machine) and copy over the program and data directories to your PC. Then you only need to adjust notes.ini (if you changed the paths when copying) and create a shortcut to launch Notes 7. The shortcut target is Notes7ProgramDirectory\nlnotes.exe =Notes7ProgramDirectory\notes.ini
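For example, if the Notes 7 program directory were copied to C:\Notes7 (a made-up path), the shortcut target would be:

C:\Notes7\nlnotes.exe =C:\Notes7\notes.ini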
Now you can launch Notes 7 after Notes 9 has already been launched. The other way around will not work: if Notes 7 is already started, Notes 9 will not start in parallel.
I'm working with multiple Liferay projects (different portal, plugins, users and user groups, etc.) at the same time, and often have to switch between them. This switch requires lots of steps, like:
Editing portal-ext.properties (to change the Liferay database and some custom project-specific properties) and editing 'portal-setup-wizard.properties'
Adding/removing portlets, themes and hooks from the Eclipse server instance, and sometimes cleaning Tomcat's 'data', 'webapps' and 'work' folders
Going to Liferay's Control Panel / Server / Plugins Installation and re-indexing portlets like 'Users and Organizations' or 'Documents and Media'
So, I thought that creating a new server instance for each project, with a new Tomcat and JRE, would be a nice idea. When I had to switch projects, I could just stop the old server and start another. At first, I thought (actually, I was advised) that using the same Liferay Plugins SDK (6.1.0) should be OK, as long as the server instances are the same version.
Practically this doesn't work 100% perfectly. While most of the work gets done, there are some problems here and there, like a theme not getting deployed properly, hooks not being applied, etc. As I understand it, there is some [Liferay SDK] - [Liferay Server] binding, which means that only one server (the first one I created) will fully work.
For example, by investigating [Liferay SDK folder]/build.[user name].properties, I can see some properties that refer to a specific server/JRE location:
app.server.portal.dir
app.server.lib.global.dir
app.server.deploy.dir
app.server.type
app.server.dir
So, my question is: what should I do to work with multiple Liferay projects?
Is the multi-server practice a good approach for working with multiple projects?
If yes, should I create a different SDK for each server? Maybe a different Eclipse workspace too? Or is there some way to use the same SDK?
What about working with servers of different Liferay versions?
Personally, I set up every project with its own source, Tomcat, database, etc., even if it means duplication. These days storage is cheap and makes this possible. Of course your mileage may vary, but I thought I'd share my setup with you.
I have a project directory with all my projects which looks like so:
/projects
    /foo-project
    /bar-project
    /my-project
Inside a project I have
/my-project
    /tomcat
        /bin
        /conf
        ...
    /src
        /portal
            ... my portal source ...
        /plugins
            ... my plugin source ...
    /portal-ext.properties
I then set up Tomcat to use different ports (8080, 8081, 8082, etc.) so that I can just leave them all running if I have to or want to.
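As an illustration, the relevant lines in tomcat/conf/server.xml for a second instance might look like the sketch below; the port numbers are just examples, and the shutdown and AJP ports need bumping too so the instances don't collide:

<Server port="8006" shutdown="SHUTDOWN">
  <!-- ... -->
  <Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8444"/>
  <Connector port="8010" protocol="AJP/1.3" redirectPort="8444"/>
  <!-- ... -->
</Server>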
I set up Liferay to use a different database for each Liferay instance.
I place the portal-ext.properties as a sibling to the tomcat directory and Liferay will read this file (assuming the default behavior). This makes for quick and easy edits, and also makes it easy to see how each project is set up.
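As an illustration, a minimal portal-ext.properties for one project might only pin the database; the JDBC values below are placeholders, not taken from any real project:

# /projects/my-project/portal-ext.properties
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost/lportal_myproject?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=liferay
jdbc.default.password=secret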
The advantages should be clear. You can just "walk away" from a project and into another without tearing down and setting up. And when you return everything will still be as you left it. Context switching is also quicker and helpful if you want to answer a question about a project you're not yet working on.
Depending on the complexity of each of your projects, multi-instance might not work for you. Hooks and EXTs may conflict with each other and it appears as if this is already the case with your projects.
If you can afford the space (which is not much) this has been the fastest way I have found as a Liferay developer.
If we start working on a new Liferay project in our company, we set up:
a new database schema,
a new, clean Liferay server connected with that schema and
a fresh Eclipse workspace, with
a clean SDK project
Only this way can you be sure to have cleanly separated projects. To switch to another project, just shut down the current Liferay server, start up the new one, and switch to the right workspace in Eclipse. This all takes no more than 2 minutes, a lot less than all the cleanup actions you have to do if you share a workspace and server.
In my opinion, this is the approach of most development teams.
Why mess with all these complications on a single computer? I use Oracle VirtualBox and set up a separate VM for each project. Even though I work on a laptop, it has 8 cores, and I've bumped my memory up to 16GB and set each machine up with 4GB of RAM.
I can have multiple VMs running at once and have set all active projects as home pages in Chrome. Using bridged networking each VM has its own IP address, and they all listen on 8080.
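For reference, bridged networking can also be set per VM from the command line while the VM is powered off; the VM and host adapter names below are made up:

VBoxManage modifyvm "liferay-project-a" --nic1 bridged --bridgeadapter1 "eth0"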
Another benefit is that, although my primary project is being developed using Eclipse Indigo and LR 6.1 CE GA1, I have another using Eclipse Juno, its specific IDE plugin and LR 6.1.1 CE GA2. So it also works as a new version tester.
VirtualBox is free. Memory is cheap. And remember that you can put a VM to sleep without shutting it down. That takes about 10-20 seconds and waking it up again takes 30-60 seconds.
The simplest solution would be:
Create 3 different users; the Liferay SDK's build.[user name].properties file is separate for each user. So, let's say you want to run 3 servers with the same SDK. Create 3 files like:
build.user1.properties
build.user2.properties
build.user3.properties
Now, when you want to deploy something to server 1, log in to the machine as user1 and deploy the portlet; this will read build.user1.properties and deploy the portlet/hook to the specified location.
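For example, build.user1.properties would point the SDK at server 1's bundle using the same properties listed in the question; all paths below are placeholders:

app.server.type=tomcat
app.server.dir=/servers/liferay-1/tomcat-7.0.27
app.server.deploy.dir=/servers/liferay-1/tomcat-7.0.27/webapps
app.server.lib.global.dir=/servers/liferay-1/tomcat-7.0.27/lib/ext
app.server.portal.dir=/servers/liferay-1/tomcat-7.0.27/webapps/ROOT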
Hope this will resolve your deployment issue.
Also, when you have 3 users, you can run the 3 different servers together under different user accounts; this way they stay separate and secure, and apart from the admin, nobody can shut them down.
Hope this helps!
I have a small solution composed of 2 main projects: an MVC4 Web API and a Silverlight 5 application. I initially configured and deployed the application on the Azure platform and it all went great, but ever since, when I deploy again, the Silverlight project does not get pushed and the online site keeps the old version.
I should mention that everything works great with the Azure simulator on my local dev machine.
Anybody had a similar issue?
Regards,
I would suspect first (as Simon suggests) that the browser likely still has the previous client cached and loads that instead of downloading your new client.
You can use a version number in the markup on the page that hosts the Silverlight app to help. While it's easy for you to clear the cache, you don't really want to have to tell users to do that whenever you update.
Set the version to whatever your latest assembly version is (the Silverlight client project assembly); this will force the browser to download the client if the cached version has a lower number.
<param name="source" value="AppPath/App.xap?version=2.0.0.6"/>
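For context, that param sits inside the Silverlight object tag on the hosting page; the snippet below follows the standard Visual Studio test-page layout, and the xap path and version are placeholders:

<object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
  <param name="source" value="ClientBin/App.xap?version=2.0.0.6"/>
  <param name="minRuntimeVersion" value="5.0.61118.0"/>
  <param name="autoUpgrade" value="true"/>
</object>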
OK, so after pulling my hair out, I finally figured it out.
I have to change the build configuration to Release in VS, do a rebuild, and then publish, because apparently the Azure project does not rebuild the project when you publish it.
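The command-line equivalent of that manual step would be roughly the following, run before packaging/publishing; the solution name is a placeholder:

msbuild MySolution.sln /t:Rebuild /p:Configuration=Release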
To solve this issue you'll need to identify the source of the problem (is it a client-side caching issue or not?). Even though you say caching isn't the problem, we'll need to be sure about this first.
What I suggest is that you do the following first:
Activate Remote Desktop on your role
Connect through RDP and save this file to the role: http://support.microsoft.com/kb/841290 (fciv.exe)
Find the *.xap file (usually in E:\sitesroot) and get its checksum using fciv.exe (see the example after this list)
Modify the Silverlight project locally (maybe change a label or move around an element) to make sure its hash has changed.
Redeploy the application
Connect through RDP and use fciv.exe to get the checksum of the *.xap file once again
Compare both checksums
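For steps 3 and 6, the fciv call is along these lines; the exact folder under E:\sitesroot depends on your deployment, so treat the path as a placeholder (fciv prints an MD5 hash by default):

fciv.exe E:\sitesroot\0\ClientBin\App.xap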
If the checksums are different, then it means that the deployment worked correctly and the Silverlight xap has been updated. If the checksum is the same, the problem lies with the deployment.
Please let us know the result so we can help you find the solution.
I have a question regarding Setup Projects in .NET (C#, Framework 4.0):
I made a setup project for a Windows service; in the installation wizard, the user must input the name of the Windows service as it will be installed. The setup program also creates a shortcut to the uninstall program in case the user wants to remove that Windows service.
The question is: how can I let the user run the same setup program several times, specifying a different service name each time?
This behaviour could be required because the Windows service is a socket consumer that connects to a server and retrieves data; to take advantage of the server's capabilities, the user could install the same Windows service multiple times, each instance pointing to a different port on the server, to perform the data-retrieval task much faster. The service is the same, and the user just modifies the port in the service's configuration file, so it's not logical to create a new version of the installer each time.
Any clue or suggestion would be appreciated, thanks in advance.
This can be done by using a multiple-instance installation. The general approach is:
create a transform for each instance you want available to the user
use a custom EXE bootstrapper which applies a new transform to your MSI package each time a new instance is installed
The transform should change at least the PackageCode, ProductCode and UpgradeCode.
This is not supported by Visual Studio setup projects. So either you do it manually or use a commercial setup authoring tool which supports multiple instances.
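For reference, once an instance transform exists, the bootstrapper ends up running something like the following for each additional instance (file names are placeholders); MSINEWINSTANCE=1 tells Windows Installer to register a new product instance rather than act on the existing one:

msiexec /i MyService.msi TRANSFORMS=Instance2.mst MSINEWINSTANCE=1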