I want to log events during the application run to the generated log files. For testing, I have added the following code in beforeSave() of MOrder:
log.log(Level.SEVERE, "SEVERE log details");
log.log(Level.WARNING, "WARNING log details");
I have run the server and made a .jnlp client installation. While creating a Sales Order, the log details are displayed on the server but are not traced in the generated log file.
In Preferences: Trace Level is WARNING and Trace File is true.
In ADempiere Server Management (web view), the Trace Level is WARNING, and I could trace the log details in the file when I created the Sales Order using the web window.
Is there anything I missed to trace the log details at the application level?
The ADempiere software structure is divided into two pieces:
Client :
Desktop with jnlp
Swing_product.zip
Web interface (zkwebui)
Server:
Document processor
Accounting processor
Scheduler and workflow processor
DB transactions and the JBoss app.
Everything that happens on the system is still logged in the server logs, under %Adempiere_Home%/logs.
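For reference, here is a minimal sketch of the beforeSave() test from the question (the constructor signature and the inherited CLogger follow ADempiere's org.compiere model conventions; treat it as illustrative, not as the exact customization):

import java.util.Properties;
import java.util.logging.Level;

// Sketch of the customized model class; in ADempiere, MOrder extends the
// generated X_C_Order, and the PO base class already provides a CLogger
// instance named "log".
public class MOrder extends X_C_Order
{
    public MOrder(Properties ctx, int C_Order_ID, String trxName)
    {
        super(ctx, C_Order_ID, trxName);
    }

    @Override
    protected boolean beforeSave(boolean newRecord)
    {
        log.log(Level.SEVERE, "SEVERE log details");   // passes any trace level
        log.log(Level.WARNING, "WARNING log details"); // passes when Trace Level is WARNING or finer
        return true; // returning false vetoes the save
    }
}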
I run a continuous WebJob in my WebApp. What I have found is that it can randomly restart.
I've checked my web app settings and "Always On" is turned on. I have no triggers that can cause a reboot.
This is an empty web app, created from scratch. All I have done is publish my continuous WebJob.
How can I prevent these random restarts?
As I can see from App Insights, it restarts 10 minutes after the first run:
2/9/2021, 12:46:45 PM - TRACE
Checking for active containers
Severity level: Information
2/9/2021, 12:46:45 PM - TRACE
Job host started
Severity level: Information
2/9/2021, 12:46:45 PM - TRACE
No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
Severity level: Warning
2/9/2021, 12:46:45 PM - TRACE
Starting JobHost
Severity level: Information
2/9/2021, 12:36:44 PM - TRACE
2: Change Feed Processor: Processor_Container2 with Instance name: 4b94336ff47c4678b9cf4083a60f0b3bf1cd9f77ce7d501100a9d4e60bd87e8e has been started
Severity level: Information
2/9/2021, 12:36:37 PM - TRACE
1: Change Feed Processor: Processor_Container1 with Instance name: 4b94336ff47c4678b9cf4083a60f0b3bf1cd9f77ce7d501100a9d4e60bd87e8e has been started
Severity level: Information
2/9/2021, 12:36:32 PM - TRACE
Checking for active containers
Severity level: Information
2/9/2021, 12:36:32 PM - TRACE
Job host started
Severity level: Information
Kindly review the Jobs and logs to isolate the issue further:
For continuous WebJobs, Console.Out and Console.Error are routed to the "application logs"; they will show up as a file or blob depending on your configuration of the application logs (similar to your Web App).
Kindly check this document for more details - https://github.com/projectkudu/kudu/wiki/WebJobs#logging
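As a quick illustration (a hypothetical sketch; the message text is arbitrary), writes like these from a continuous WebJob end up in the application logs:

Console.Out.WriteLine("Heartbeat: job is alive");   // routed to the application logs
Console.Error.WriteLine("Something went wrong");    // also routed, as error output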
I have seen cases where having unnecessary app settings on the configuration blade of WebJobs on the portal caused reboots.
Kindly identify and remove unnecessary app settings as required (as a test).
Also, kindly see if the setting **WEBJOBS_RESTART_TIME** is present - the timeout in seconds between when a continuous job's process goes down (for any reason) and when we re-launch it again (only for continuous jobs).
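For example, to wait 60 seconds before a crashed continuous job is re-launched (the value here is illustrative), the app setting would look like:

WEBJOBS_RESTART_TIME = 60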
On the App Service, in the left navigation, click on Diagnose and solve problems - check out the tiles for "**Diagnostic Tools**", "Availability and Performance" & "Best Practices", and review the WebJob details.
Just to isolate, kindly see if setting singleton helps. If a continuous job is set as singleton, it'll run only on a single instance as opposed to running on all instances. By default, it runs on all instances.
{
"is_singleton": true
}
Refer to this doc: https://github.com/projectkudu/kudu/wiki/WebJobs-API#set-a-continuous-job-as-singleton
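Separately, the "No job functions found" warning in your log suggests the host is not discovering any functions. Kindly ensure your job classes and methods are public and that the binding extensions are registered at startup. A minimal sketch assuming the WebJobs SDK 3.x (the Add* calls shown are examples; register the extensions you actually use):

using Microsoft.Extensions.Hosting;

class Program
{
    static async System.Threading.Tasks.Task Main()
    {
        var builder = new HostBuilder()
            .ConfigureWebJobs(b =>
            {
                // Register the binding extensions your functions rely on;
                // without these, the host reports "No job functions found".
                b.AddAzureStorageCoreServices();
                b.AddTimers();
            });

        using (var host = builder.Build())
        {
            await host.RunAsync(); // keeps the continuous job running
        }
    }
}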
P.S. Copying the answer from our discussion on the Q&A thread, to benefit the community.
I decided to post this question because I have run out of debugging ideas; even just ideas are golden, since I know it can be difficult to help debug a virtual instance through here (debugging code is hard enough, haha). Anyway, I have created a virtual machine in Compute Engine, and I created a log file that I populate, for example, with this command in a Python script; let's call it logging.py:
import logging

logging.basicConfig(filename='app.log', level=logging.INFO,
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')

variable = 'example'  # placeholder; any value whose type you want to log
logging.info('Some message ' + str(type(variable)))
Every time I run python3 logging.py, app.log is effectively populated (logging.py and app.log are in the same directory, the /home/username/ folder).
I want Stackdriver to show this log in the Logging viewer every time it's written, so I installed the Stackdriver agent as follows, on the virtual machine command line:
$ curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
$ sudo bash install-logging-agent.sh
No errors that I can see are delivered here; in fact, the installation messages show up in the Stackdriver viewer.
After this, I proceed to create a .conf file at /etc/google-fluentd/config.d/app.conf with these parameters:
<source>
  type tail
  format none
  path /home/username/app.log
  pos_file /var/lib/google-fluentd/pos/app.pos
  read_from_head true
  tag whatever-tag
</source>
After that is created, I launch sudo service google-fluentd restart.
After I execute python3 logging.py, no logs are added to the Stackdriver logging viewer.
So, where might I have gone wrong?
Things I have tried/checked:
-I have more than 13 gigabytes of RAM available.
-If I run logger "some message" on the command line, I effectively add a log with "some message" to the log viewer
-If I run
ps ax | grep fluentd
I obtain :
3033 ? Sl 0:09 /opt/google-fluentd/embedded/bin/ruby /usr/sbin/google-fluentd --log /var/log/google-fluentd/google-fluentd.log --no-supervisor
3309 pts/0 S+ 0:00 grep --color=auto fluentd
-Both my user and the service account I use have the Logging Admin permission in IAM roles.
-This is the documentation I have based myself on:
https://cloud.google.com/logging/docs/agent/troubleshooting?hl=es-419
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list?hl=es-419
https://cloud.google.com/logging/docs/agent/configuration?hl=es-419
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-engine-6600d81e70e3
https://cloud.google.com/logging/docs/agent/installation
-If I run sudo service google-fluentd status, the agent appears active.
-My instance has access to all the APIs. It's an n1-standard-4 (4 vCPUs, 15 GB of memory) running Ubuntu Linux 18.04.
So, what else can I check to debug this? I'm out of ideas here; hope I'm not being an idiot here :(
Based on my understanding, I think that you are looking for the following monitored resource types:
generic_node
“A generic node identifies a machine or other computational resource for which no more specific resource type is applicable. The label values must uniquely identify the node.”
generic_task
“A generic task identifies an application process for which no more specific resource is applicable, such as a process scheduled by a custom orchestration system. The label values must uniquely identify the task.”
The source of my information can be found here.
This document explains how to send logs from your application in different ways:
Cloud Logging API
Cloud Logging Agent
Generic fluentd
As you mentioned having installed fluentd, let me provide more focused documentation about the Cloud Logging Agent. I also found some Python client library documentation that you may be interested in.
Finally, I found an nginx/apache use-case guide that you may use as a reference.
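If the tail source is configured but nothing arrives, the agent's own log usually says why. A quick check (the paths are the agent's defaults; adjust if yours differ):

# Validate the agent configuration without restarting it
sudo /opt/google-fluentd/embedded/bin/fluentd --dry-run -c /etc/google-fluentd/google-fluentd.conf

# Watch the agent's own log for tail/permission/parse errors
sudo tail -f /var/log/google-fluentd/google-fluentd.log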
For some reason, if I change both the directory the .conf file points to and the directory where the log lives to /var/logs/, making the final path /var/logs/app.logs, it does work correctly. Possibly there is a configuration issue causing the logging agent to only capture logs in specific predetermined folders, or a permissions issue that stops it from working when the log is in the user's home directory.
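For reference, the adjusted app.conf that works is the same as above, just with the new path (a sketch of my test setup):

<source>
  type tail
  format none
  path /var/logs/app.logs
  pos_file /var/lib/google-fluentd/pos/app.pos
  read_from_head true
  tag whatever-tag
</source>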
I found this solution by chance, however (basically random testing). I did not find anything in the main articles that are supposed to teach me how to configure the logging agent that could point me in the right direction, those articles being:
https://cloud.google.com/logging/docs/agent/troubleshooting?hl=es-419
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list?hl=es-419
https://cloud.google.com/logging/docs/agent/configuration?hl=es-419
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-engine-6600d81e70e3
https://cloud.google.com/logging/docs/agent/installation
If I needed it to work in my home directory, it's not clear just from checking these articles how to do it, or what configuration file I would need to change, or where to start, so I recommend that Google improve that aspect of the docs.
The documentation you sent (https://docs.fluentd.org/quickstart) is pretty interesting; maybe I can find the explanation there. Thank you for your help.
I have a Node server running on Google Cloud Run. Now I want to enable Stackdriver tracing. When I run the service locally, I am able to get the traces in GCP. However, when I run the service on Google Cloud Run, I am getting an error:
"@google-cloud/trace-agent ERROR TraceWriter#publish: Received error with status code 403 while publishing traces to cloudtrace.googleapis.com: Error: The request is missing a valid API key."
I made sure that the service account has the Cloud Trace Agent role.
The first line in my app.js:
require('@google-cloud/trace-agent').start();
Running locally, I am using a .env file containing:
GOOGLE_APPLICATION_CREDENTIALS=<path to credentials.json>
According to https://github.com/googleapis/cloud-trace-nodejs, these values are auto-detected if the application is running on Google Cloud Platform, so I don't have these credentials in the GCP image.
There are two challenges to using this library with Cloud Run:
Despite the note about auto-detection, Cloud Run is an exception: it is not yet auto-detected. This can be addressed for now with some explicit configuration.
Because Cloud Run services are only guaranteed CPU while they are responding to a request, queued-up trace data may not be sent before CPU resources are withdrawn. This can be addressed for now by configuring the trace agent to flush ASAP:
const tracer = require('@google-cloud/trace-agent').start({
  serviceContext: {
    service: process.env.K_SERVICE || "unknown-service",
    version: process.env.K_REVISION || "unknown-revision"
  },
  flushDelaySeconds: 1,
});
On a quick review I couldn't see how to trigger the trace flush, but the shorter timeout should help avoid some delays in seeing the trace data appear in Stackdriver.
EDIT: While nice in theory, in practice there's still significant race conditions with CPU withdrawal. Filed https://github.com/googleapis/cloud-trace-nodejs/issues/1161 to see if we can find a more consistent solution.
I am desperately trying to debug an error 500 that occurs only when I try to update an object from my Xamarin.Forms offline DB to Azure. I am using Azure Mobile Client.
I set all the logging to ON in Azure, then downloaded the log. I can see the generic error, but nothing useful.
<failedRequest url="https://MASKED:80/tables/Appel/9A3342A2-0598-4126-B0F6-2999B524B4AE"
siteId="Masked"
appPoolId="Masked"
processId="6096"
verb="PATCH"
remoteUserName=""
userName=""
tokenUserName="IIS APPPOOL\Masked"
authenticationType="anonymous"
activityId="{80000063-0000-EA00-B63F-84710C7967BB}"
failureReason="STATUS_CODE"
statusCode="500"
triggerStatusCode="500"
timeTaken="625"
xmlns:freb="http://schemas.microsoft.com/win/2006/06/iis/freb"
>
The table that fails is the only one I extend with some virtual, runtime-calculated fields and navigation fields. But I added [JsonIgnore] to stop the Azure service from creating the fields in the local DB (that works) or sending them over the wire to the server. Still, I always get the 500 error, and no exception is thrown when debugging the C# Azure backend either.
How can I find the stack trace or the "deep" reason for this 500 error in my backend?
For a C# Mobile App backend, you could add the following code in the ConfigureMobileApp method of your Startup.MobileApp.cs file to include error details in the response to your client side:
config.IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always;
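In context, the method would look roughly like this (a sketch of a typical Azure Mobile Apps .NET backend startup; your existing configuration calls will differ):

using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Config;
using Owin;

public partial class Startup
{
    public static void ConfigureMobileApp(IAppBuilder app)
    {
        HttpConfiguration config = new HttpConfiguration();

        // Return full exception details to the client instead of a bare 500.
        // Only keep this enabled while debugging.
        config.IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always;

        new MobileAppConfiguration()
            .UseDefaultConfiguration()
            .ApplyTo(config);

        app.UseWebApi(config);
    }
}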
You could then capture the exception in your mobile application, or leverage Fiddler to capture the network traces when invoking the PATCH operation, to retrieve the detailed error message.
Moreover, you are currently viewing the Failed Request Tracing logs; you need to check the Application logs instead. For details, follow Enable diagnostics logging for web apps in Azure App Service.
We are sending all trace logs to our NLog logger (with a trace listener).
NLog is configured to work with DryIoc.
Locally this works perfectly; however, in Azure (web app), the first trace message is logged before we can create our DryIoc container. Even a PreAppStartMethodAttribute does not help, as the trace log occurs even before the PreAppStartMethodAttribute.
Is there a way to do some initialization tasks before Azure logs its first trace message?
I found a workaround by using an async wrapper that prevents NLog from flushing the data until the container/configuration has been initialized.
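For anyone hitting the same thing, a sketch of the async-wrapper idea in NLog.config (the target names and the file target are illustrative; the point is that the wrapper queues messages on a background writer instead of flushing them immediately):

<targets>
  <!-- Queue trace messages instead of writing them synchronously; the
       background writer gives container initialization time to finish. -->
  <target xsi:type="AsyncWrapper" name="asyncFile"
          queueLimit="10000" overflowAction="Grow" timeToSleepBetweenBatches="50">
    <target xsi:type="File" name="file" fileName="trace.log" />
  </target>
</targets>
<rules>
  <logger name="*" minlevel="Trace" writeTo="asyncFile" />
</rules>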