Continuous WebJob randomly restarts - Azure

I run a continuous WebJob in my WebApp, and I have found that it can randomly restart.
I've checked my web app settings: "Always On" is turned on, and I have no triggers that could cause a reboot.
This is an empty web app, created from scratch. All I have done is publish my continuous WebJob.
How can I prevent these random reboots?
As I can see from Application Insights, it restarts about 10 minutes after the first run:
2/9/2021, 12:46:45 PM - TRACE
Checking for active containers
Severity level: Information
2/9/2021, 12:46:45 PM - TRACE
Job host started
Severity level: Information
2/9/2021, 12:46:45 PM - TRACE
No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
Severity level: Warning
2/9/2021, 12:46:45 PM - TRACE
Starting JobHost
Severity level: Information
2/9/2021, 12:36:44 PM - TRACE
2: Change Feed Processor: Processor_Container2 with Instance name: 4b94336ff47c4678b9cf4083a60f0b3bf1cd9f77ce7d501100a9d4e60bd87e8e has been started
Severity level: Information
2/9/2021, 12:36:37 PM - TRACE
1: Change Feed Processor: Processor_Container1 with Instance name: 4b94336ff47c4678b9cf4083a60f0b3bf1cd9f77ce7d501100a9d4e60bd87e8e has been started
Severity level: Information
2/9/2021, 12:36:32 PM - TRACE
Checking for active containers
Severity level: Information
2/9/2021, 12:36:32 PM - TRACE
Job host started
Severity level: Information

Kindly review the jobs and logs to isolate the issue further:
For continuous WebJobs, Console.Out and Console.Error are routed to the application logs; they will show up as a file or blob depending on your application logs configuration (similar to your WebApp).
Kindly check this document for more details - https://github.com/projectkudu/kudu/wiki/WebJobs#logging
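As a minimal sketch (assuming your continuous WebJob is a plain console app; the names are illustrative), anything the job writes to standard output or standard error ends up in those application logs:
// Program.cs of a continuous WebJob deployed as a console app.
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        while (true)
        {
            Console.Out.WriteLine($"[{DateTime.UtcNow:O}] heartbeat");      // surfaces as an Info-level application log
            Console.Error.WriteLine($"[{DateTime.UtcNow:O}] sample error"); // surfaces as an Error-level application log
            await Task.Delay(TimeSpan.FromSeconds(30));
        }
    }
}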
I have seen cases where having unnecessary app settings on the configuration blade of WebJobs in the portal caused reboots.
Kindly identify and remove unnecessary app settings as required (as a test).
Also, kindly see if the setting WEBJOBS_RESTART_TIME is present - the timeout in seconds between when a continuous job's process goes down (for any reason) and when we re-launch it again (only for continuous jobs).
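For example, a sketch using the Azure CLI (substitute your own resource group, app name, and timeout value):
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings WEBJOBS_RESTART_TIME=60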
On the App Service, in the left navigation, click Diagnose and solve problems, then check out the "Diagnostic Tools" > "Availability and Performance" and "Best Practices" tiles, and review the WebJob details.
Just to isolate, kindly see if setting the job as a singleton helps. If a continuous job is set as singleton, it will run on only a single instance, as opposed to running on all instances (the default). You can do this with a settings.job file at the root of the WebJob containing:
{
  "is_singleton": true
}
Refer to this doc - https://github.com/projectkudu/kudu/wiki/WebJobs-API#set-a-continuous-job-as-singleton
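The same setting can also be applied through the Kudu WebJobs API described in that doc; a sketch, assuming your Kudu deployment credentials and a placeholder job name:
curl -X PUT \
  -u <deployment-user> \
  -H "Content-Type: application/json" \
  -d '{ "is_singleton": true }' \
  https://<app-name>.scm.azurewebsites.net/api/continuouswebjobs/<job-name>/settings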
P.S. Copying the answer from our discussion on the Q&A thread, to benefit the community.

Related

Why is App Insights only showing Warning and Error tracing levels? What about lower levels?

I have a .NET Framework 4.5.1 app service that has the following code:
Console.WriteLine("🔥 Console.WriteLine");
Trace.WriteLine("🔥 Trace.WriteLine");
Trace.TraceInformation("🔥 Trace.TraceInformation");
Trace.TraceWarning("🔥 Trace.TraceWarning");
Trace.TraceError("🔥 Trace.TraceError");
I would like to be able to see the logs our application writes when the app is in production in Azure, so I enabled App Service logging for my web app.
I have the logging level set to verbose because we want to see everything, and I also have Application Insights turned on for my app. But when I check the logs, all I see are the entries at the warning level and higher.
I checked in the "Logs" tab, and I also checked my blob storage: I see logs, but they're all at warning level or higher.
There was one way I got this to work, and that's the feature that saves logs to the file system. But that feature turns itself off after 12 hours, the file might get too big anyway, and I'd like to use Application Insights since it has a nicer interface.
How can I view all my logs my application outputs over a long period of time?
I usually just remove all logging configuration from config files like appsettings.json and configure the ApplicationInsightsLoggerProvider in Startup.cs, like so:
// Requires the Microsoft.Extensions.Logging.ApplicationInsights package.
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.ApplicationInsights;

public void ConfigureServices(IServiceCollection services)
{
    services.AddLogging(conf =>
    {
        // Route ILogger output to Application Insights.
        conf.AddApplicationInsights();
        // Capture everything from Information up by default ("" = all categories)...
        conf.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.Information);
        // ...but only Warning and above from the framework's own categories.
        conf.AddFilter<ApplicationInsightsLoggerProvider>("Microsoft", LogLevel.Warning);
        conf.AddFilter<ApplicationInsightsLoggerProvider>("System", LogLevel.Warning);
    });
    ...
}
It's documented in the Application Insights ILogger documentation.
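For completeness, a minimal usage sketch with a hypothetical controller, showing where the filtered entries originate:
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger) => _logger = logger;

    public IActionResult Index()
    {
        // With the filters above, this Information-level entry reaches Application Insights,
        // while framework chatter below Warning is filtered out.
        _logger.LogInformation("🔥 ILogger information");
        return View();
    }
}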
If you have various trace levels in your web app but are only interested in having certain levels of logs sent to the logging endpoint, you can set a filter for the minimum level in your application settings under Configuration. By default, even without the app setting, the minimum trace level is set to Warning.
How to set the app setting for the AppServiceAppLogs level:
The application setting name is APPSERVICEAPPLOGS_TRACE_LEVEL and the value is the minimum level (e.g. Error, Warning, Verbose). Refer to TraceLevel for more info.
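For example, a sketch using the Azure CLI (placeholder resource group and app name):
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings APPSERVICEAPPLOGS_TRACE_LEVEL=Verbose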
Links
https://learn.microsoft.com/en-us/azure/app-service/troubleshoot-diagnostic-logs
https://azure.github.io/AppService/2020/08/31/AzMon-AppServiceAppLogs.html

HTTP Error 500.30 - ANCM In-Process Start Failure with newly created app service

We created a new development environment, so I cloned a currently working app service into a new one, changed the configuration, and deployed the same code, but the new app service is returning HTTP Error 500.30 - ANCM In-Process Start Failure.
After trying the console for more details, that's what I get. I don't think it's related to the runtime identifier, because the same code runs on other, identical app services.
The dreaded 500.3x ANCM error can mean different things, so I'm going to help you pinpoint them.
My recommendation:
Go to Azure Portal > Your App Service > development tools
Open console.
Type in <YourWebAppName>.exe.
This will show the error messages relevant to your startup issue.
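A hypothetical Kudu console session (MyWebApp.exe stands in for your app's executable); any startup exception will be printed straight to the console:
cd D:\home\site\wwwroot
MyWebApp.exe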
Also, some information regarding errors can be seen here:
https://learn.microsoft.com/en-us/aspnet/core/test/troubleshoot-azure-iis?view=aspnetcore-3.1#app-startup-errors

Stackdriver-trace on Google Cloud Run failing, while working fine on localhost

I have a Node server running on Google Cloud Run. Now I want to enable Stackdriver tracing. When I run the service locally, I am able to get the traces in GCP. However, when I run the service on Google Cloud Run, I am getting an error:
"@google-cloud/trace-agent ERROR TraceWriter#publish: Received error with status code 403 while publishing traces to cloudtrace.googleapis.com: Error: The request is missing a valid API key."
I made sure that the service account has tracing agent role.
First line in my app.js
require('@google-cloud/trace-agent').start();
Running locally, I am using a .env file containing:
GOOGLE_APPLICATION_CREDENTIALS=<path to credentials.json>
According to https://github.com/googleapis/cloud-trace-nodejs, these values are auto-detected if the application is running on Google Cloud Platform, so I don't have these credentials on the GCP image.
There are two challenges to using this library with Cloud Run:
Despite the note about auto-detection, Cloud Run is an exception: it is not yet auto-detected. This can be addressed for now with some explicit configuration.
Because Cloud Run services only have CPU while they are handling a request, queued-up trace data may not be sent before CPU resources are withdrawn. This can be addressed for now by configuring the trace agent to flush as soon as possible:
const tracer = require('@google-cloud/trace-agent').start({
  serviceContext: {
    service: process.env.K_SERVICE || 'unknown-service',
    version: process.env.K_REVISION || 'unknown-revision',
  },
  // Flush buffered traces after 1 second instead of the default,
  // so spans are sent before Cloud Run throttles the CPU.
  flushDelaySeconds: 1,
});
On a quick review I couldn't see how to trigger the trace flush, but the shorter timeout should help avoid some delays in seeing the trace data appear in Stackdriver.
EDIT: While nice in theory, in practice there's still significant race conditions with CPU withdrawal. Filed https://github.com/googleapis/cloud-trace-nodejs/issues/1161 to see if we can find a more consistent solution.

Unable to update VM with nodejs app on Google App Engine

When I try to deploy from the gcloud CLI, I get the following error:
Copying files to Google Cloud Storage...
Synchronizing files to [gs://staging.logically-abstract-www-site.appspot.com/].
Updating module [default]...\Deleted [https://www.googleapis.com/compute/v1/projects/logically-abstract-www-site/zones/us-central1-f/instances/gae-builder-vm-20151030t150724].
Updating module [default]...failed.
ERROR: (gcloud.preview.app.deploy) Error Response: [4] Timed out creating VMs.
My app.yaml is:
runtime: nodejs
vm: true
api_version: 1
automatic_scaling:
  min_num_instances: 2
  max_num_instances: 20
  cool_down_period_sec: 60
  cpu_utilization:
    target_utilization: 0.5
I am logged in successfully and have the correct project ID. I see the new version created in the Cloud Console for App Engine, but the error seems to happen after that.
In the stdout log I see both instances come up, with the last console.log statement I put in the app after it starts listening on the port, but in shutdown.log I see "app was unhealthy" and in syslog I see "WARNING: never got healthy response from app, but sending /_ah/start query anyway."
From my experience with Node.js on Google App Engine, "Timed out creating VMs" is neither a traditional timeout nor necessarily about creating VMs. I have found that other errors were reported during the launch of the server, which happens right after the VMs are created. So I recommend checking the console output to see if it tells you anything.
To see the console output:
For a VM instance, go to your VM instances and click the VM instance you want, then scroll toward the bottom and click "Serial console output".
For stdout console logging, go to your logs under Monitoring, then change the log type dropdown from "Request" to "stdout".
I had found differences in process.env when running locally versus in the cloud. I hope you find your solution too. Good luck!

How to trace log details in the log file in ADempiere

I want to log instances during the application run in the generated log files. For testing I have added the following code in beforeSave() of MOrder:
log.log(Level.SEVERE, "SEVERE log details");
log.log(Level.WARNING, "WARNING log details");
I have run the server and made a .jnlp client installation. While creating a Sales Order, the log details are displayed on the server but are not traced in the generated log file.
In Preferences, Trace Level is WARNING and Trace File is true.
In ADempiere Server Management (web view), the Trace Level is WARNING, and I could trace the log details in the file when I created the Sales Order using the web window.
Is there anything I missed to trace the log details at the application level?
The ADempiere software structure is divided into two pieces:
Client:
Desktop with jnlp
Swing_product.zip
Web interface (zkwebui)
Server:
Document processor
Accounting processor
Scheduler and workflow processor
DB transactions and the JBoss app
Everything that happens on the system is still logged in the server logs, under %Adempiere_Home%/logs.
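As a minimal sketch (assuming the standard org.compiere.util.CLogger that ADempiere model classes use; the class shown is illustrative, since real model classes inherit their log field from PO), the logging in beforeSave() would look like this, and the output lands under %Adempiere_Home%/logs on whichever tier actually executed the code:
import java.util.logging.Level;
import org.compiere.util.CLogger;

public class MOrderLoggingExample
{
    // ADempiere model classes normally inherit a CLogger named "log" from PO.
    private static final CLogger log = CLogger.getCLogger(MOrderLoggingExample.class);

    protected boolean beforeSave(boolean newRecord)
    {
        log.log(Level.WARNING, "WARNING log details"); // written only when the trace level includes WARNING
        log.log(Level.SEVERE, "SEVERE log details");   // written at the default trace level
        return true;
    }
}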
