How to create a custom scheduler in SugarCRM (cron)?

I am trying to create a custom scheduler in SugarCRM by following its documentation at
http://support.sugarcrm.com/Documentation/Sugar_Developer/Sugar_Developer_Guide_7.9/Architecture/Job_Queue/Schedulers/Creating_Custom_Schedulers/.
I have created the job label in ./custom/Extension/modules/Schedulers/Ext/Language/en_us.final_test.php
with the code
$mod_strings['LBL_FINAL_TEST'] = 'Final Test Of Scheduler';
and the job function in
./custom/Extension/modules/Schedulers/Ext/ScheduledTasks/final_test.php
with the code
<?php
array_push($job_strings, 'final_test');
$GLOBALS['log']->fatal('my fatal message outside function'); // this works
function final_test()
{
    $GLOBALS['log']->fatal('my fatal message inside function'); // this doesn't
    return true;
}
?>
If I put
$GLOBALS['log']->fatal('my fatal message outside function');
outside the function, it runs and I get the message in the log file. But
when I put
$GLOBALS['log']->fatal('my fatal message inside function');
inside the function, it doesn't work and I don't get any log entry.
Which part am I doing wrong? Where can I find a proper tutorial for developing a custom scheduler for SugarCRM?
NOTE: I have set the scheduler to run every minute.

I'd guess that your Schedulers are not running at all.
(Your "outside" message probably only makes it into the log whenever the file is loaded in general)
Make sure your cron jobs are configured correctly, as they are required to call Sugar's Scheduler Engine every minute: https://support.sugarcrm.com/Knowledge_Base/Schedulers/Introduction_to_Cron_Jobs/
If you don't feel like setting them up, you could also manually trigger the Schedulers with php -f cron.php in your Sugar directory, run as the web server's service account (e.g. sudo -u www-data php -f cron.php on Debian Linux).
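As a rough sketch, a typical crontab entry for the web server user looks something like the following (the /var/www/sugarcrm path here is an assumption; use your actual Sugar directory):
* * * * * cd /var/www/sugarcrm && php -f cron.php > /dev/null 2>&1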
If your function's output still doesn't appear in the logs:
Check if your current function is in custom/modules/Schedulers/Ext/ScheduledTasks/scheduledtasks.ext.php. If not, run a Quick Repair & Rebuild.
Check file permissions on the log file.
Check your PHP log/output for errors. For example, if you had already defined a function called "final_test" somewhere else, PHP would terminate with a fatal error due to the function name collision.

Stackdriver logging agent not showing logs read from a custom log file in the Stackdriver logging viewer on Google Cloud Platform

I decided to post this question because I have run out of debugging ideas, and even just ideas are golden, since I know it can be difficult to help debug a virtual instance from here (debugging code is hard enough, haha). Anyway, I have created a virtual machine in Compute Engine and a log file that I populate, for example, with this code in a Python script, let's call it logging.py:
import logging
logging.basicConfig(filename='app.log', level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logging.info('Some message ' + str(type(variable)))
Every time I run python3 logging.py, app.log is populated as expected. (logging.py and app.log are in the same directory, the /home/username/ folder.)
I want Stackdriver to show this log in the logging viewer every time it's written, so I installed the Stackdriver agent as follows, from the virtual machine's command line:
$ curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
$ sudo bash install-logging-agent.sh
No errors that I can see are reported here; in fact, the messages obtained look normal.
[Screenshot: messages in the Stackdriver viewer]
After this, I proceeded to create a .conf file at /etc/google-fluentd/config.d/app.conf
with these parameters:
<source>
  type tail
  format none
  path /home/username/app.log
  pos_file /var/lib/google-fluentd/pos/app.pos
  read_from_head true
  tag whatever-tag
</source>
After that is created, I run sudo service google-fluentd restart.
But after I execute python3 logging.py, no logs are added to the Stackdriver logging viewer.
So, where might I have gone wrong?
Things I have tried/checked:
-I have more than 13 gigabytes of RAM available
-If I run logger "some message" on the command line, I effectively add a log with "some message" to the log viewer
-If I run
ps ax | grep fluentd
I obtain:
3033 ? Sl 0:09 /opt/google-fluentd/embedded/bin/ruby /usr/sbin/google-fluentd --log /var/log/google-fluentd/google-fluentd.log --no-supervisor
3309 pts/0 S+ 0:00 grep --color=auto fluentd
-Both my user and the service account I use have logging admin permissions in their IAM roles.
-This is the documentation I have based my setup on:
https://cloud.google.com/logging/docs/agent/troubleshooting?hl=es-419
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list?hl=es-419
https://cloud.google.com/logging/docs/agent/configuration?hl=es-419
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-engine-6600d81e70e3
https://cloud.google.com/logging/docs/agent/installation
-If I run sudo service google-fluentd status, the agent appears active.
-My instance has access to all the APIs. It's an n1-standard-4 (4 vCPUs, 15 GB of memory) running Ubuntu Linux 18.04.
So, what else can I check to debug this? I'm out of ideas here; I hope I'm not missing something obvious :(
Based on my understanding, I think you are looking for the following fluentd resource types:
generic_node
“A generic node identifies a machine or other computational resource for which no more specific resource type is applicable. The label values must uniquely identify the node.”
generic_task
“A generic task identifies an application process for which no more specific resource is applicable, such as a process scheduled by a custom orchestration system. The label values must uniquely identify the task.”
The source of my information has been found here
This document explains how to send logs from your application in different ways:
Cloud Logging API
Cloud Logging Agent
Generic fluentd
As you mentioned having installed fluentd, let me provide more focused documentation about the Cloud Logging Agent. I also found some Python client library documentation that you may be interested in.
Finally, I found an nginx/apache use-case guide that you may use as a reference.
For some reason, if I change both the directory the .conf file points to and the directory where the log lives to /var/logs/, so that the final path is /var/logs/app.logs, it does work correctly. Possibly there is a configuration issue causing the logging agent to only capture logs in specific predetermined folders, or a permissions issue that stops it from working when the log is in the user's home directory.
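For reference, a sketch of how the adjusted source block might look after that change (the same settings as in the original app.conf, with only the path updated to the location reported above):
<source>
  type tail
  format none
  path /var/logs/app.logs
  pos_file /var/lib/google-fluentd/pos/app.pos
  read_from_head true
  tag whatever-tag
</source>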
I found this solution by chance, however (random testing, basically). I did not find anything in the main articles that are supposed to teach me how to configure the logging agent that could point me in the right direction, those articles being:
https://cloud.google.com/logging/docs/agent/troubleshooting?hl=es-419
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list?hl=es-419
https://cloud.google.com/logging/docs/agent/configuration?hl=es-419
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-engine-6600d81e70e3
https://cloud.google.com/logging/docs/agent/installation
If I needed it to work in my home directory, it's not clear just from these articles how to do it, what configuration file I would need to change, or where to start, so I recommend that Google improve that aspect of the docs.
The documentation you have sent, https://docs.fluentd.org/quickstart, is pretty interesting; maybe I can find the explanation there. Thank you for your help.

mergExt - mergFTPD external - any callback message or result after successfully starting ftp daemon?

Is there a way to get information back from mergFTPD (mergExt component) about whether the FTP daemon was started successfully?
Or do I have to check whether the FTP daemon is running with standard LC commands in the iOS app?
For example:
after starting the daemon, put a file into the defined folder using a put URL "file:...
and then try to get that file back with a put URL "ftp://..... "
This works, but using a callback message or getting a result after starting the daemon would be more comfortable.
Unfortunately there's no callback I can use here

Setting up cron job with Bolt on Plesk 11.5.30

I'm trying to set up a cron job with Bolt.cm via the Plesk 11.5.30 admin console.
Bolt task scheduler documentation: https://docs.bolt.cm/v20/tasks
Scheduled Tasks
I have set up crontab for the Apache user as follows:
Min H DM M DW Command
0 */1 * * * /var/www/vhosts/mysite.com/httpdocs/app/nut cron
0 */1 * * * httpdocs/app/nut cron
The reason for the two different commands here is that I've read in another thread about troubleshooting Plesk that the command path should be relative to the domain. I assume that if the first one fails, the second will still run. I also assume the above will run cron every hour.
Bolt Config
In my config file, cron_hour is set to 3am. However, as the listener is set for the hourly event CRON_HOURLY, I assume that setting is bypassed/ignored. Either way, this setup has been running for over 48 hours and no effect has been observed yet.
Bolt Setup
As a test, I have added the following to my Bolt extension so that it simply sends me an email when it is run (the code has been cut down to keep it short).
use \Bolt\CronEvents;

class Extension extends \Bolt\BaseExtension
{
    function initialize()
    {
        $this->app['dispatcher']->addListener(CronEvents::CRON_HOURLY, array($this, 'cronNotifier'));
    }

    function cronNotifier()
    {
        // send email script...
    }
}
I've tested my email script individually, so I know it works. With the cron job set up and assumed to be running, this function is never hit. What's more, the bolt_cron table in the database remains empty.
If anyone could suggest some things I could try to get this running, it would be greatly appreciated.
THE FIX
In order to fix this issue I altered my schedule to execute every 5 minutes (a quicker debug loop). I then made the following discoveries:
I had to set public execute permissions on the nut file (so the Apache user could execute it)
For the server I'm using, only the full-path version works
Thanks @Gawain for prompting me in the right direction.
Scheduled Tasks
Yes, that looks correct. The handling of directory locations was enhanced recently, but you used to need to call nut from within the Bolt directory.
Bolt Config
cron_hour only applies to daily and longer periods. There was a bug I fixed recently that was blocking hourly tasks between midnight and 3 am.
OK, there should be traces logged in the Bolt log about running cron events.
If you run this, do you see the hourly job trigger and your email sent?
./app/nut cron --run=cron.Hourly
If that works, then the problem is with the system cron/crontab. If you were editing it directly, were you using crontab -e?
Also, if you're on an older checkout of master, you can try prefixing the crontab command with
cd /var/www/vhosts/mysite.com/httpdocs &&
...before each invocation of ./app/nut
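As a sketch, the resulting hourly crontab entry, using the paths from the question, would then look something like this:
0 * * * * cd /var/www/vhosts/mysite.com/httpdocs && ./app/nut cron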

program crashes when called by task scheduler

I have written a program in C# that includes a bit of file IO and SystemEvents switches.
The program runs fine if I run it from Explorer, but when I call it at logon from Task Scheduler, it crashes. Any clues as to why this would be happening?
Have you tried opening Task Scheduler as Administrator, e.g. right-click and Run as Administrator?
It could be due to permissions. You can view the history of the task in the History tab to check this.
Make sure the correct permissions have been set for the exe. As you mentioned running it at logon from Task Scheduler: what permissions does that use, and are they the same as when you run the exe manually?
Try:
1 - a try/catch block with some logging:
try
{
    // ... your code
}
catch (Exception ex)
{
    // TODO: logging
}
2 - Does Event Viewer give you a clue?
[Windows key] + [R] => eventvwr
Thanks for the try-catch suggestion, lordkain.
The error was thrown when trying to access an external icon file. The fix was as simple as adding the appropriate path to the "Start in:" field in Task Scheduler.
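As a related, hedged sketch: another way to avoid this class of problem is to resolve such files relative to the executable rather than the current working directory, so the "Start in:" setting no longer matters (the app.ico file name below is only illustrative):
using System;
using System.IO;

class PathExample
{
    static void Main()
    {
        // Resolve the icon relative to the executable's folder, not the current working directory.
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;
        string iconPath = Path.Combine(baseDir, "app.ico"); // "app.ico" is an illustrative file name

        if (!File.Exists(iconPath))
        {
            Console.WriteLine("Icon not found: " + iconPath);
        }
    }
}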
Are you scheduling it from a different folder? Be sure to copy any DLLs over as well as the .exe. I've made this mistake before!

MsDeploy remoting executing manifest twice

I have:
Created a manifest for msdeploy to:
Stop, Uninstall, Copy over, Install, and Start a Windows service.
Created a package from the manifest
Executed msdeploy with the package against a remote server.
Problem: It executes the entire manifest twice.
Tried: I have tinkered with the waitInterval and waitAttempts, thinking it was timing out and starting over, but that hasn't helped.
Question: What might be making it execute twice?
The Manifest:
<sitemanifest>
  <runCommand path="net stop TestSvc"
              waitInterval="240000"
              waitAttempts="1"/>
  <runCommand path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe /u C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe"
              waitInterval="240000"
              waitAttempts="1"/>
  <dirPath path="C:\msdeploy\TestSvc\TestSvc\bin\Debug" />
  <runCommand path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe"
              waitInterval="240000"
              waitAttempts="1"/>
  <runCommand path="net start TestSvc"
              waitInterval="240000"
              waitAttempts="1"/>
</sitemanifest>
The command issued to package it:
"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy"
-verb:sync
-source:manifest=c:\msdeploy\custom.xml
-dest:package=c:\msdeploy\package.zip
The command issued to execute it:
"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy"
-verb:sync
-source:package=c:\msdeploy\package.zip
-dest:auto,computername=<computerNameHere>
I am running as a domain user who has administrative access on the box. I have also tried passing credentials; it is not a permissions issue, since the commands are succeeding, just executing twice.
Edit:
I enabled -verbose and found some interesting lines in the log:
Verbose: Performing synchronization pass #1.
...
Verbose: Source filePath (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.exe) does not match destination (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.exe) differing in attributes (lastWriteTime['11/08/2011 23:40:30','11/08/2011 23:39:52']). Update pending.
Verbose: Source filePath (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.pdb) does not match destination (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.pdb) differing in attributes (lastWriteTime['11/08/2011 23:40:30','11/08/2011 23:39:52']). Update pending.
After these lines, files aren't copied the first time, but are copied the second time
...
Verbose: The dependency check 'DependencyCheckInUse' found no issues.
Verbose: Received response from agent (HTTP status 'OK').
Verbose: The current synchronization pass is missing stream content for 2 objects.
Verbose: Performing synchronization pass #2.
...
High Level
Normally I deploy a freshly built package with newer bits than are on the server.
During pass two, it duplicates everything that was done in pass one.
In pass 1, it will:
Stop, Uninstall, (delete some log files created by the service install), Install, and Start a Windows service
In pass 2, it will:
Stop, Uninstall, Copy files over, Install, and Start a Windows service.
I have no idea why it doesn't copy over the files in pass 1, or why pass 2 is triggered.
If I redeploy the same package instead of deploying fresh bits, it will run all the steps in pass 1 and not run pass 2, probably because the files have the same timestamp.
There is not enough information in the question to really reproduce the problem and give a specific answer... but there are several things to check/change/try to make this work:
runCommand needs specific privileges
waitInterval="240000" and waitAttempts="1" (double quotes instead of single quotes)
permissions for the deployment service / deployment agent regarding directories etc. on the target machine
use the tempAgent feature
work through the troubleshooting section, especially the logs, and try the -whatif and -verbose options
EDIT - after the addition of the -verbose output:
I see these possibilities:
Time
Both machines have a difference in time (either one of them is just a bit off or some timezone issue...)
Filesystem
If one of the filesystems is FAT this could lead to problems (timestamp resolution...)
EDIT 2 - as per comments:
In my last EDIT I wrote about timestamps because my suspicion is that something goes wrong when these are compared... that could be, for example, differing clocks between the two machines (even a difference of 30 seconds can have an impact) and/or some timezone issue...
I wrote about the filesystem, especially FAT, since the timestamp resolution of FAT is about 2 seconds while NTFS has a much higher resolution; again, this could have an impact when comparing timestamps...
From what you describe I would suggest the following workarounds:
use preSync and postSync for the Service handling parts (i.e. preSync for stop + uninstall and postSync for install + start) and do only the pure sync in the manifest or commandline
OR
use a script for the runCommand parts
EDIT 3 - as per the comment from Merlyn Morgan-Graham, the result for future reference:
When using the runCommand provider, use batch files. For some reason this made it stop running two passes.
The problem with this solution is that one can't specify the installation directory of the service via a SetParameters.xml file (same for dontUseCommandExe / preSync / postSync regarding SetParameters.xml).
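As a hedged sketch of that approach (the file names are illustrative and the commands are taken from the manifest above), the service-handling steps could live in two batch files on the target machine, with the manifest's runCommand entries invoking those scripts instead of inline commands:
rem pre-deploy.cmd (illustrative name): stop and uninstall the service
net stop TestSvc
C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe /u C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe

rem post-deploy.cmd (illustrative name): reinstall and start the service
C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe
net start TestSvc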
EDIT 4 - as per comment from Merlyn Morgan-Graham:
The timeout params control whether to kill that specific command, not the stopping of the Windows service itself... in this case it seems that the Windows service takes rather long to stop, so only the runCommands get executed without the copy/sync, and a new try of the whole run is initiated...
I had the same problem, but I don't create a package.zip file;
I perform the synchronization directly in one step.
The preSync/postSync solution helped me a lot, and there is no need to use manifest files.
You can try the following command in your case:
"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy"
-verb:sync
-preSync:runCommand="net stop TestSv && C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe /u
C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe",waitInterval=240000,waitAttempts=1
-source:dirPath="C:\msdeploy\TestSvc\TestSvc\bin\Debug"
-dest:auto,computername=<computerNameHere>
-postSync:runCommand="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe
C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe && net start TestSvc",waitInterval=240000,waitAttempts=1
"-verb:sync" parameter means you synchronize data between a source and a destination. In your case your case, first time you perform synchronization between the "C:\msdeploy\TestSvc\TestSvc\bin\Debug" folder and the "package.zip". Plus, you are using manifest file, so when you perform second synchronization between the "package.zip" and the destination "computername", msbuild uses previously provided manifest twice for the destination and for the source, so each manifest operation runs twice.
I used the && trick to perform several commands in one command line.
Also, in my case, I had to add timeout operation to be sure the service were completely stopped ("ping -n 30 127.0.0.1 > nul").
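For illustration, that pause would slot into the preSync command chain from above roughly like this (the 30-second ping and the other values are just the example values used here):
-preSync:runCommand="net stop TestSvc && ping -n 30 127.0.0.1 > nul && C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe /u C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe",waitInterval=240000,waitAttempts=1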
