Program crashes when called by Task Scheduler - c#-4.0

I have written a program in C# that includes a bit of file I/O and SystemEvents switches.
The program runs fine if I run it from Explorer, but when I call it at logon from Task Scheduler, it crashes. Any clues as to why this would be happening?

Have you tried opening Task Scheduler as Administrator - e.g. right-click, Run as Administrator?
It could be due to permissions. You can check the task's history in the History tab.
Make sure the correct permissions have been set for the exe. You mentioned running it at logon from Task Scheduler - what permissions does that use, and are they the same as when you run the exe manually?

Try:
1 - A try/catch block with some logging
try
{
    // ... your code
}
catch (Exception ex)
{
    // TODO: log ex somewhere persistent - e.g. a file or the event log
}
2 - Does Event Viewer give you a clue?
[windows key] + [r] => eventvwr
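For example, a minimal sketch of a catch-block logger that writes to the Application event log, so the failure also shows up in Event Viewer even when no console is attached. The source name is just an example (not from the original post), and creating an event source needs admin rights the first time:

using System;
using System.Diagnostics;

static class CrashLog
{
    public static void Write(Exception ex)
    {
        // Example source name; registering it requires admin rights
        // the first time this runs.
        const string source = "MyScheduledApp";
        if (!EventLog.SourceExists(source))
            EventLog.CreateEventSource(source, "Application");
        EventLog.WriteEntry(source, ex.ToString(), EventLogEntryType.Error);
    }
}

Calling CrashLog.Write(ex) from the catch block above makes the crash visible under Windows Logs > Application in eventvwr.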

Thanks for the try-catch suggestion, lordkain.
The error was thrown when trying to access an external icon file. The fix was as simple as adding the appropriate file path to the "Start In:" field in Task Scheduler.
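An alternative that avoids depending on the scheduler's working directory at all is to build the path from the executable's own location - a minimal sketch, assuming the icon sits next to the .exe (the file name here is just an example, not from the original post):

using System;
using System.IO;

// Task Scheduler may start the process with a different working directory,
// so resolve the icon next to the executable instead of using a relative path.
string baseDir = AppDomain.CurrentDomain.BaseDirectory;
string iconPath = Path.Combine(baseDir, "app.ico"); // example file name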

Are you scheduling it from a different folder? Be sure to copy any DLLs over as well as the .exe. I've made this mistake before!

Related

Azure Functions - PowerShell - "The pwsh executable cannot be found at ..."

When my PowerShell Azure Function runs using the Test/Run feature in the portal, I get this error in the connected console output.
The pwsh executable cannot be found at "C:\Program Files (x86)\SiteExtensions\Functions\3.3.1\workers\powershell\7\runtimes\win\lib\netcoreapp3.1\pwsh.exe"
Note that 'Start-Job' is not supported by design in scenarios where PowerShell is being hosted in other applications. Instead, usage of the 'ThreadJob' module is recommended in such scenarios
My script looks something like below.
Note that the invocation of the web request does indeed fail with an HTTP 500, triggering, I assume, the catch block and the if.
try {
Invoke-WebRequest ...
}
catch {
$exc = $_;
}
if ($null -ne $exc) {
Write-Warning "This failed when something blah.";
throw $exc;
}
This is the gist. The real script actually makes a few web requests, any of which could fail. I want to ensure they all get executed, so I catch and store the exception, and only at the end does the script throw and fail; my hope is that at least one of the problems makes it out into the logs or somewhere in the portal.
The actual error message is the one quoted above. It smells like an Azure problem to me.
I just ran it again and it fixed itself. Thanks for wasting my time, Azure.

How to create custom scheduler in sugarcrm?

I am trying to create a custom scheduler in SugarCRM using its documentation at
http://support.sugarcrm.com/Documentation/Sugar_Developer/Sugar_Developer_Guide_7.9/Architecture/Job_Queue/Schedulers/Creating_Custom_Schedulers/.
I have created the job label at ./custom/Extension/modules/Schedulers/Ext/Language/en_us.final_test.php
with the code
$mod_strings['LBL_FINAL_TEST'] = 'Final Test Of Scheduler';
and the job function at
./custom/Extension/modules/Schedulers/Ext/ScheduledTasks/final_test.php
with the code
<?php
array_push($job_strings, 'final_test');
$GLOBALS['log']->fatal('my fatal message outside function'); // this works
function final_test(){
    $GLOBALS['log']->fatal('my fatal message inside function'); // this doesn't
    return true;
}
?>
If I put
$GLOBALS['log']->fatal('my fatal message outside function');
outside the function, it runs and I get the message in the log file. But
when I put
$GLOBALS['log']->fatal('my fatal message inside function');
inside the function, it doesn't work and I don't get any log entry.
Which part am I doing wrong? Where can I find a proper tutorial for developing a custom scheduler for SugarCRM?
NOTE: I have set the scheduler to run every minute.
I'd guess that your Schedulers are not running at all.
(Your "outside" message probably only makes it into the log whenever the file is loaded in general)
Make sure your cron jobs are configured correctly, as they are required to call Sugar's Scheduler Engine every minute: https://support.sugarcrm.com/Knowledge_Base/Schedulers/Introduction_to_Cron_Jobs/
If you don't feel like setting them up, you could also manually trigger the Schedulers with php -f cron.php (run as the web server account, e.g. sudo -u www-data php -f cron.php on Debian Linux) in your Sugar directory.
If your function's output still doesn't appear in the logs:
Check if your current function is in custom/modules/Schedulers/Ext/ScheduledTasks/scheduledtasks.ext.php. If not, run a Quick Repair & Rebuild.
Check file permissions on the log file
Check your PHP log/output for errors. E.g. in case you defined a function called "final_test" somewhere else already, PHP would terminate with a fatal error due to a function name collision.

Simulate Webjob Shutdown for debugging

Scenario:
I have hooked up the Web job with a CancellationToken and need to simulate shutdown to see if the cancellation is being processed successfully. I've tried the Ctrl + C combination but the cancellation did not fire. What is the correct way of simulating this shutdown for debugging purposes?
Since this is debug code, I did it with a bit of a hack. The issue in my case was that the CancellationToken was being passed in by a framework call, which did not allow access to the CancellationTokenSource.
private async Task InitializeEventProcessing(CancellationToken ctx)
{
#if DEBUG
    // Replace the framework-supplied token with one that cancels itself
    // after 10 seconds, simulating a shutdown request.
    CancellationTokenSource cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
    ctx = cts.Token;
#endif
    .
    .
    .
}
Didn't really like the other answer, so here is what I did...
The built-in WebJobsShutdownWatcher looks for an environment variable, WEBJOBS_SHUTDOWN_FILE, and watches that file for changes.
If you configure your debug settings to provide a file name for that environment variable, then all you have to do is create that file (the contents don't matter) and it will follow the same shutdown path as if it were deployed.
Of course you have to delete it afterwards, or add a build step to delete it on debug, but at least it executes the same code as a graceful shutdown on the host.
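For reference, a rough sketch of how that can look when debugging locally, assuming the Microsoft.Azure.WebJobs SDK is referenced. Here the variable is set in code rather than in the project's debug settings, and the file path is just an example:

using System;
using System.IO;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // Locally, point WEBJOBS_SHUTDOWN_FILE at a file we control
        // (on Azure the host sets this variable itself).
        string shutdownFile = Path.Combine(Path.GetTempPath(), "webjob.shutdown"); // example path
        Environment.SetEnvironmentVariable("WEBJOBS_SHUTDOWN_FILE", shutdownFile);

        using (var watcher = new WebJobsShutdownWatcher())
        {
            // Creating or touching the shutdown file cancels this token,
            // exercising the same graceful-shutdown path as a real deployment.
            watcher.Token.WaitHandle.WaitOne();
            Console.WriteLine("Shutdown requested.");
        }
    }
}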

Why is my webjob terminating without throwing an exception?

My Azure WebJob appears to be terminating without throwing an exception and I'm lost.
My web job is run on demand (or scheduled) and has a dependency on my web site's DLL (an MVC app). It calls into it to do most of the work, which includes working with an Entity Framework database and making REST calls to several other sites. Most of the work is done asynchronously. Most of the code used to do this work is also called from other parts of the site without problems, and it goes without saying that the web job works flawlessly when run locally.
The web job terminates without seeming to throw an exception, and it doesn't seem to be possible to debug a web job that's not of the continuously running variety (?). Therefore, my debugging has mostly been of the Console.WriteLine variety. Because of that and the asynchrony, I haven't been able to nail down exactly where it's crashing - I thought it was while accessing the database, but after mucking with it, the database access started working.. ugh. My next best guess is that it dies during an await or other async plumbing. It does, however, crash within two try/catch blocks whose finally blocks log results to Redis and Azure storage. None of that happens. I cannot figure out, or even imagine, how this process is crashing without hitting any exception handlers.. ?
Anyone had this problem with an azure webjob? Any idea what I should be looking for or any tips for debugging this?
Thanks!
I figured it out! One of the many things happening asynchronously was the creation of a certificate. I traced it down to this:
signedCert = new X509Certificate2(cert, "notasecret", X509KeyStorageFlags.Exportable);
This code works fine when called from my Azure website or from my tests, but it kills the WebJob process completely without throwing an exception! For example, the WriteLine in the exception handler below never gets called:
X509Certificate2 signedCert;
try
{
signedCert = new X509Certificate2(cert, "notasecret", X509KeyStorageFlags.Exportable);
}
catch (Exception ex)
{
// We never get here! Argh!
Console.WriteLine("Exception converting cert: " + ex);
throw;
}
Extremely time consuming and frustrating. Unlike the diagnosis, the fix is simple - presumably because MachineKeySet stores the imported key in the machine key store rather than in a user-profile store that the WebJob's account may not have loaded:
signedCert = new X509Certificate2(
cert,
"notasecret",
X509KeyStorageFlags.Exportable |
X509KeyStorageFlags.MachineKeySet |
X509KeyStorageFlags.PersistKeySet);

MsDeploy remoting executing manifest twice

I have:
Created a manifest for msdeploy to:
Stop, Uninstall, Copy over, Install, and Start a Windows service.
Created a package from the manifest
Executed msdeploy against the package against a remote server.
Problem: It executes the entire manifest twice.
Tried: I have tinkered with the waitInterval and waitAttempts, thinking it was timing out and starting over, but that hasn't helped.
Question: What might be making it execute twice?
The Manifest:
<sitemanifest>
<runCommand path="net stop TestSvc"
waitInterval="240000"
waitAttempts="1"/>
<runCommand
path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe /u
C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe"
waitInterval="240000"
waitAttempts="1"/>
<dirPath path="C:\msdeploy\TestSvc\TestSvc\bin\Debug" />
<runCommand
path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe
C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe"
waitInterval="240000"
waitAttempts="1"/>
<runCommand path="net start TestSvc"
waitInterval="240000"
waitAttempts="1"/>
</sitemanifest>
The command issued to package it:
"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy"
-verb:sync
-source:manifest=c:\msdeploy\custom.xml
-dest:package=c:\msdeploy\package.zip
The command issued to execute it:
"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy"
-verb:sync
-source:package=c:\msdeploy\package.zip
-dest:auto,computername=<computerNameHere>
I am running as a domain user who has administrative access on the box. I have also tried passing credentials - it is not a permissions issue, the commands are succeeding, just executing twice.
Edit:
I enabled -verbose and found some interesting lines in the log:
Verbose: Performing synchronization pass #1.
...
Verbose: Source filePath (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.exe) does not match destination (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.exe) differing in attributes (lastWriteTime['11/08/2011 23:40:30','11/08/2011 23:39:52']). Update pending.
Verbose: Source filePath (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.pdb) does not match destination (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.pdb) differing in attributes (lastWriteTime['11/08/2011 23:40:30','11/08/2011 23:39:52']). Update pending.
After these lines, files aren't copied the first time, but are copied the second time
...
Verbose: The dependency check 'DependencyCheckInUse' found no issues.
Verbose: Received response from agent (HTTP status 'OK').
Verbose: The current synchronization pass is missing stream content for 2 objects.
Verbose: Performing synchronization pass #2.
...
High Level
Normally I deploy a freshly built package with newer bits than are on the server.
During pass two, it duplicates everything that was done in pass one.
In pass 1, it will:
Stop, Uninstall, (delete some log files created by the service install), Install, and Start a Windows service
In pass 2, it will:
Stop, Uninstall, Copy files over, Install, and Start a Windows service.
I have no idea why it doesn't copy over the files in pass 1, or why pass 2 is triggered.
If I redeploy the same package instead of deploying fresh bits, it will run all the steps in pass 1, and not run pass 2. Probably because the files have the same time stamp.
There is not enough information in the question to really reproduce the problem to give a specific answer... but there are several things to check/change/try to make this work:
runCommand needs specific privileges
waitInterval="240000" and waitAttempts="1" (double quotes instead of single quotes)
permissions for the deployment service / deployment agent regarding directories etc. on the target machine
use the tempAgent feature
work through the troubleshooting section, especially the logs, and try the -whatif and -verbose options
EDIT - after the addition of the -verbose output:
I see these possibilities:
Time
Both machines have a difference in time (either one of them is just a bit off or some timezone issue...)
Filesystem
If one of the filesystems is FAT this could lead to problems (timestamp resolution...)
EDIT 2 - as per comments:
In my last EDIT I wrote about timestamps because my suspicion is that something goes wrong when they are compared... that could be, for example, differing clocks between the two machines (even a difference of 30 seconds can have an impact) and/or some timezone issue...
I wrote about the filesystem, especially FAT, since the timestamp resolution of FAT is about 2 seconds while NTFS has a much higher resolution; again, this could have an impact when comparing timestamps...
From what you describe I would suggest the following workarounds:
use preSync and postSync for the Service handling parts (i.e. preSync for stop + uninstall and postSync for install + start) and do only the pure sync in the manifest or commandline
OR
use a script for the runCommand parts
EDIT 3 - as per comment from Merlyn Morgan-Graham the result for future reference:
When using the runCommand provider, use batch files. For some reason this made it stop running two passes.
The problem with this solution is that one can't specify the installation directory of the service via a SetParameters.xml file (same for dontUseCommandExe / preSync / postSync regarding SetParameters.xml).
EDIT 4 - as per comment from Merlyn Morgan-Graham:
The timeout params control when that specific command gets killed, not how long the Windows service itself is given to stop... in this case it seems the Windows service takes rather long to stop, so only the runCommands get executed without the copy/sync, and a new pass over the whole run is initiated...
I had the same problem, but I don't create a package.zip file.
I perform the synchronization directly in one step.
The preSync/postSync solution helped me a lot and there is no need to use manifest files.
You can try the following command in your case:
"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy"
-verb:sync
-preSync:runCommand="net stop TestSvc && C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe /u
C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe",waitInterval=240000,waitAttempts=1
-source:dirPath="C:\msdeploy\TestSvc\TestSvc\bin\Debug"
-dest:auto,computername=<computerNameHere>
-postSync:runCommand="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe
C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe && net start TestSvc",waitInterval=240000,waitAttempts=1
The "-verb:sync" parameter means you synchronize data between a source and a destination. In your case, the first synchronization is between the "C:\msdeploy\TestSvc\TestSvc\bin\Debug" folder and "package.zip". Plus, you are using a manifest file, so when you perform the second synchronization, between "package.zip" and the destination "computername", msdeploy uses the previously provided manifest twice - for the destination and for the source - so each manifest operation runs twice.
I used the && trick to perform several commands in one command line.
Also, in my case, I had to add a timeout operation to be sure the service was completely stopped ("ping -n 30 127.0.0.1 > nul").
