Lotus Notes agents not running as scheduled

I have 3 agents in Lotus Notes; these agents just update different CSV files on a shared drive. According to their logs they are running, but each run only takes a second, and the CSV files are not being updated.
I've tried adjusting the schedule time
Tried other servers
Changed the Target
Disabled and re-enabled the agent
Made a copy of the agent
I haven't edited the code.
My workaround is to run these agents manually. That actually updates the CSV files, and it takes at least 5 minutes for the agents to finish running, which is expected. These agents just suddenly stopped running as scheduled.

As Torsten mentioned, your Domino server does not have enough permissions. By default it runs as Local System, which does not have access to any network shares.
See this technote before it disappears: https://www.ibm.com/support/pages/domino-server-unable-locate-mapped-drives-when-started-windows-service

Related

Trigger Windows logon on a remote machine as a different user

I have been searching around for a while to find a way to trigger a logon on a remote machine as a different user.
This is for a Blue Prism RPA requirement. We have a few virtual machines that run RPA processes, and these machines need to be logged in with the bot account for the processes to run. We have a Login Agent that can be used to trigger logons on the machines, but it has to be done on a per-machine basis, which can be time consuming.
I can remote into those machines to initiate the logons, but the automation fails if I close the session because of a display issue.
A command I could trigger from my CMD prompt that would do the job for me would be a great help.
TIA
If you'd like to ensure that the machine is logged in before the process starts, you can build that into the scheduler.
Set the first step in the schedule as "login", and whether it completes or fails, run the process after a set amount of time.
I finally managed to get this done using the AutomateC.exe utility that comes with Blue Prism. You can run pretty much any process on any VM and also specify input parameters, which is handy when you need to interact with many VMs.
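For reference, here is a minimal sketch of how that can be scripted from one machine instead of triggering each VM by hand. The AutomateC.exe path, process name, resource names, credentials, and the command-line switches (/run, /resource, /user) are assumptions for illustration only; verify them against the AutomateC documentation for your Blue Prism version.

# Sketch: start a Blue Prism process on several runtime resources via AutomateC.exe.
# Every path, name, credential and switch below is an assumption; check your
# Blue Prism / AutomateC documentation before relying on anything like this.
import subprocess

AUTOMATEC = r"C:\Program Files\Blue Prism Limited\Blue Prism Automate\AutomateC.exe"
PROCESS_NAME = "Login"                                # hypothetical logon process
RESOURCES = ["RPA-VM-01", "RPA-VM-02", "RPA-VM-03"]   # hypothetical runtime resources
USER, PASSWORD = "bp_admin", "secret"                 # hypothetical credentials

for resource in RESOURCES:
    cmd = [AUTOMATEC, "/run", PROCESS_NAME, "/resource", resource,
           "/user", USER, PASSWORD]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"{resource}: exit code {result.returncode}")
    if result.stdout:
        print(result.stdout.strip())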

Hybris hotfolder import "pause"

Is there any workaround in Hybris to pause hotfolders manually?
Here is the story and the problem: in production we have 2 data import servers for our application. Both servers use the same shared hotfolder for multiple different hotfolder configurations, and thus import several different incoming files day by day. Our production deployment for these servers differs from the other (non data import) servers of the application, because we have to wait for these servers to finish the file import that is currently in progress, so we have to check the /processing folder manually over and over again until there aren't any files left. Our goal is to skip this manual step and instead just "tell" Hybris to stop processing after the imports that are currently in progress.
Is there any OOTB implementation to do this?
AFAIK, no, there is not. You can probably work around your issue by creating a temporary "work in progress" file when the import starts and deleting it when the import ends. You can then automate the manual check with a method that checks whether a "work in progress" file exists.
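As a rough sketch of that idea (assuming a Python helper on the deployment side; the hotfolder path, the marker file name, and the polling/timeout values are made up for illustration), the deployment script could wait until the marker is gone and /processing is empty before proceeding:

# Sketch of the "work in progress" marker check for the deployment script.
# The paths, marker name and timings are assumptions; adapt them to your hotfolder setup.
import os
import time

PROCESSING_DIR = "/hotfolder/processing"      # hypothetical shared hotfolder path
WIP_MARKER = "/hotfolder/import-in-progress"  # created when an import starts, removed when it ends

def imports_finished():
    # Safe to proceed when no marker exists and /processing contains no files.
    return not os.path.exists(WIP_MARKER) and not os.listdir(PROCESSING_DIR)

def wait_for_imports(poll_seconds=30, timeout_seconds=3600):
    waited = 0
    while not imports_finished():
        if waited >= timeout_seconds:
            raise TimeoutError("Hotfolder imports did not finish in time")
        time.sleep(poll_seconds)
        waited += poll_seconds

if __name__ == "__main__":
    wait_for_imports()
    print("No imports in progress; safe to take this server out for deployment.")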

Archiving files with Python

I wrote this script that runs several executable updates in a shared network folder. Several separate machines must run these updates.
I would like to archive these updates once they have been run. However, here is the dilemma: if the first machine runs an update and archives the executable, the rest of the connected machines won't run it, as it will no longer appear in the cwd. Any ideas?
It took me a while to understand what you meant by "archiving"; you probably mean moving the files to another folder on a network shared mount. Also, the title should definitely be changed; I accidentally marked it as OK in the review triage system.
You probably want to assign an ID to each machine and have each of them create a new file once it finishes the installation (e.g. an empty finished1.txt for the PC with ID 1, finished2.txt for PC 2, etc.). Then one "master" PC should periodically scan for such files and, once it finds all the files it expects, delete/move/archive the installers. It may be a good idea to add a timeout to the script on the master PC, so that if one of the PCs gets stuck, you get notified in some way.
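A minimal sketch of that coordination, assuming a shared folder layout, machine IDs, and timeout that are placeholders for illustration:

# Sketch of the finished-file handshake between the update machines and a "master" PC.
# SHARE, ARCHIVE, MACHINE_IDS and TIMEOUT_SECONDS are assumptions; adjust to your environment.
import shutil
import time
from pathlib import Path

SHARE = Path(r"\\server\updates")   # hypothetical shared network folder holding the .exe updates
ARCHIVE = SHARE / "archive"
MACHINE_IDS = [1, 2, 3]             # hypothetical IDs of the machines that must run the updates
TIMEOUT_SECONDS = 4 * 3600

def mark_finished(machine_id):
    # Each machine calls this after it has run all updates.
    (SHARE / f"finished{machine_id}.txt").touch()

def all_finished():
    return all((SHARE / f"finished{i}.txt").exists() for i in MACHINE_IDS)

def master_archive():
    # Runs on the "master" PC: wait for every machine, then move the installers away.
    start = time.time()
    while not all_finished():
        if time.time() - start > TIMEOUT_SECONDS:
            raise TimeoutError("Not all machines reported finished; check which PC is stuck")
        time.sleep(60)
    ARCHIVE.mkdir(exist_ok=True)
    for exe in SHARE.glob("*.exe"):
        shutil.move(str(exe), str(ARCHIVE / exe.name))
    for i in MACHINE_IDS:
        (SHARE / f"finished{i}.txt").unlink()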

puppetrun is automatically called in Foreman every 3 minutes; how do I stop this?

Please check the attached screenshot: puppetrun is called repeatedly every 3 minutes, and it keeps running certain critical services repeatedly. I want puppetrun to be called on demand, not automatically like this.
Do I have to set any parameters in puppet.conf to stop these repeated puppetrun invocations, or do I have to make changes in Foreman?
Or is it the puppetmaster triggering this call on all clients?
Do you mean every 30 minutes? The screenshot shows only three runs, possibly within a two hour period.
Usually the Puppet agent runs as a service, which can be stopped and disabled; the commands vary depending on your OS. For a systemd-based OS:
systemctl stop puppet
systemctl disable puppet
On other systems, service puppet stop should at least stop it; check your OS docs for the command that disables the service at startup.
However, when using Puppet it should be perfectly safe for the agent to run continually and enforce state. The catalog shouldn't be affecting a service every time it runs; if it is, that suggests an error.

Why is there activity on our FTP server while CloudBerry FTP backup says the job is finished?

Here is the setup:
We are testing CloudBerry for backing up files to a remote FTP server.
As a test we are backing up files on a desktop, using CloudBerry FTP to an FTP server (FileZilla Server) located on the same desktop. FileZilla Server in turn accesses a Synology NAS located on the same network.
The job is set to run every 24 hours.
According to the CloudBerry interface, it last ran at midnight and lasted 1 h 31 min.
There are no running jobs showing in the CloudBerry interface.
HOWEVER, it is 9 AM and FileZilla Server is still showing file uploads. FileZilla has a counter that keeps track of the number of connections. The count is currently at 1.2 million, but there are only ~70,000 files being backed up.
I deleted the job and created a new one, with the same result.
So what is going on?
Alex
Found the root cause of this issue.
By looking through the logs in %programdata%\CloudBerryLab\CloudBerry Backup\Logs, I found that a Consistency job was running every hour...
No matter how many times I checked the Backup Job definition, this setting was never shown as it is only displayed in the Welcome tab, not the Backup Plans tab...
I changed the Consistency job to run weekly.
Hope this will help somebody else.
Note: I'm disappointed with the lack of support from CloudBerry, given that Stack Overflow is officially their support page as per http://www.cloudberrylab.com/support.aspx?page=support
