Geoserver Audit Logging stopped working - ubuntu-14.04

Problem: The GeoServer (2.8.0) installed in an Ubuntu 14.04 VM suddenly stopped creating audit logs.
Background: A couple of months ago I followed the instructions at Geoserver Training - Logging all requests on Geoserver to enable audit logging in GeoServer. The process was successful, allowing me to parse the logs using Elasticsearch, Logstash, and Kibana to get insights into service usage. Reviewing the analytics recently showed no GeoServer activity for a significant period, which suggested a potential problem with the audit logs. Sure enough, checking the audit log directory showed that no logs had been created for weeks.
Audit logs configuration:
The configuration that I included in the monitor.properties file is the following:
audit.enabled=true
audit.path=/var/lib/tomcat7/webapps/geoserver/data/logs
audit.roll_limit=100000
The configuration that I included in the header.ftl file is the following:
# start time,url,error flag,total time,response length,services,version,operation,resources,query,response content type
The configuration that I included in the content.ftl file is the following:
${startTime?datetime?iso_utc_ms},${remoteAddr!""},<#if error??>failed<#else>success</#if>,${totalTime},${responseLength?c},${service!""},${owsVersion!""},${operation!""},${resourcesList!""},${queryString!""},${responseContentType!""}
Has anyone had a similar issue in the past?
I appreciate your time and effort.

This turned out not to be related to GeoServer functionality after all. The solution was to fix the permissions on the folder where the audit logs are created, i.e. to grant write permission to the user writing the logs (tomcat). Somehow (still under investigation...) the permissions had been changed.
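For reference, a minimal sketch of that kind of fix. It uses a stand-in directory so it can run anywhere; on the actual box the path was /var/lib/tomcat7/webapps/geoserver/data/logs and the writing user was tomcat7:

```shell
# Stand-in for /var/lib/tomcat7/webapps/geoserver/data/logs
LOG_DIR=/tmp/geoserver-audit-logs-demo
mkdir -p "$LOG_DIR"

# In production you would restore ownership as well, e.g.:
#   sudo chown -R tomcat7:tomcat7 "$LOG_DIR"
chmod u+rwx "$LOG_DIR"

# Verify the directory is writable by the current user
[ -w "$LOG_DIR" ] && echo "audit log directory is writable"
```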

azcopy from Google Cloud to Azure just hangs

I am trying to use azcopy to copy from Google Cloud to Azure.
I'm following the instructions here, and I can see in the generated logs that connectivity to GCP seems fine, the SAS token is fine, and the container is created fine (I see it appear in Azure Storage Explorer), but then it just hangs. The output is:
INFO: Scanning...
INFO: Authenticating to source using GoogleAppCredentials
INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support
If I look at the log it shows:
2022/06/01 07:43:25 AzcopyVersion 10.15.0
2022/06/01 07:43:25 OS-Environment windows
2022/06/01 07:43:25 OS-Architecture amd64
2022/06/01 07:43:25 Log times are in UTC. Local time is 1 Jun 2022 08:43:25
2022/06/01 07:43:25 ISO 8601 START TIME: to copy files that changed before or after this job started, use the parameter --include-before=2022-06-01T07:43:20Z or --include-after=2022-06-01T07:43:20Z
2022/06/01 07:43:25 Authenticating to source using GoogleAppCredentials
2022/06/01 07:43:26 Any empty folders will not be processed, because source and/or destination doesn't have full folder support
As I say, there are no errors about the SAS token being out of date, missing GCP credentials, or anything like that.
It just hangs.
It does this whether I try to copy a single named file or do a recursive directory copy. Anything.
Any ideas, please?
• I would suggest checking the logs of these AzCopy transactions for more details on this scenario. On Windows, the logs are stored in the ‘%USERPROFILE%\.azcopy’ directory. AzCopy creates log and plan files for every job, so you can investigate and troubleshoot any potential problems by analyzing them.
• Since you are encountering hangs during job execution, the cause might be network fluctuation, a timeout, or server-busy responses. Remember that AzCopy retries up to 20 times in these cases, and the retry usually succeeds. Look for errors in the logs near ‘UPLOADFAILED, COPYFAILED, or DOWNLOADFAILED’.
• The following command gets all the errors with ‘UPLOADFAILED’ status from the relevant log file:
Select-String UPLOADFAILED .\<CONCERNEDLOGFILE GUID>.log
To show a job's status by job ID, run:
azcopy jobs show <job-id> --with-status=Failed
• Execute the AzCopy command from your local system with the ‘--no-check-certificate’ argument, which skips certificate checks for the system certificates at the receiving end. Also ensure that the root certificates for your network client device or software are correctly installed on your local system, as missing or misconfigured root certificates can block jobs transferring files from on-premises to Azure.
Also, when the job starts and then hangs, press CTRL+C to kill the process, then immediately check the AzCopy logs as well as the Event Viewer for any system issues. That should show exactly why the process failed and hung.
For more information, refer to the documentation links below:
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-configure
https://github.com/Azure/azure-storage-azcopy/issues/517
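On Linux/macOS (or Git Bash on Windows), the same log scan can be done with grep. A minimal sketch, using a fabricated sample log so it is self-contained; in practice you would point LOG at the real job log under %USERPROFILE%\.azcopy or ~/.azcopy:

```shell
# Fabricated sample log standing in for a real AzCopy job log
LOG=/tmp/azcopy-demo.log
printf '%s\n' \
  'INFO: transfer ok' \
  'UPLOADFAILED: 403 This request is not authorized' \
  'COPYFAILED: operation timed out' > "$LOG"

# Scan for the failure markers mentioned above
grep -E 'UPLOADFAILED|COPYFAILED|DOWNLOADFAILED' "$LOG"
```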
Frustratingly, after many calls with Microsoft support, while demoing this to another person, the exact same command with the exact same SAS token that had previously been failing just started to work.
I hate problems that 'fix themselves' as it means it will likely occur again.
Thanks to KartikBhiwapurkar-MT for a detailed response too.

Shipping logs from network share using Filebeat on Windows

The problem statement: I have an application running on Windows. I want to ship log files from this application to ELK, fronted by Kafka.
The challenge: This application writes a lot of process metadata to disk under a directory location. This information is important for the application's recovery and hence is stored on network storage to support DR. The application also writes logs to the same directory location, and we do not have the ability to separate the logs from the other process metadata. As a result, the logs are written to the network share.
I want to ship the logs to Elastic. We typically use Beats to do this. However, Filebeat does not recommend shipping logs from network storage on Windows. Ref: https://www.elastic.co/guide/en/beats/filebeat/7.11/filebeat-network-volumes.html. I have also read various GitHub issues and SO posts where people have complained about Filebeat stopping harvesting on rollover.
Since this is a network share, I was also not able to create a symlink or a junction link to trick my application into writing the logs to the local disk.
Has anyone solved this issue?
P.S.: I also read somewhere that Logstash has better handling of files on network shares. However, I do not need Logstash and would like to avoid it if possible. Also, the official Logstash documentation mentions that reading files from NFS is only occasionally tested, not thoroughly tested.
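For anyone who decides to try Filebeat against the share anyway, despite Elastic's warning: a minimal input sketch. The UNC path is hypothetical; the option names are standard Filebeat 7.x log-input settings that reduce (but do not eliminate) the known failure modes such as stale file handles and state buildup on rollover:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - '\\fileserver\appshare\logs\*.log'   # hypothetical network share path
    close_removed: true    # release the file handle when the file is deleted
    clean_removed: true    # drop registry state for files that disappear
    close_timeout: 5m      # never hold a handle open indefinitely
    scan_frequency: 10s    # re-scan the share for new/rotated files
```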

ArangoDB - Help diagnosing database corruption after system restart

I've been working with Arango for a few months now within a local, single-node development environment that regularly gets restarted for maintenance reasons. About 5 or 6 times now my development database has become corrupted after a controlled restart of my system. When it occurs, the corruption is subtle in that the Arango daemon seems to start ok and the database structurally appears as expected through the web interface (collections, documents are there). The problems have included the Foxx microservice system failing to upload my validated service code (generic 500 service error) as well as queries using filters not returning expected results (damaged indexes?). When this happens, the only way I've been able to recover is by deleting the database and rebuilding it.
I'm looking for advice on how to debug this issue - such as what to look for in log files, server configuration options that may apply, etc. I've read most of the development documentation, but only skimmed over the deployment docs, so perhaps there's an obvious setting I'm missing somewhere to adjust reliability/resilience? (this is a single-node local instance).
Thanks for any help/advice!
Please note that issues like this should rather be discussed on GitHub.

google_accounts_daemon[1140]: Full path required for exclude: net[4026532634]

I issued the command journalctl -xe on my CentOS 7 VM in Google Cloud and got this error:
google_accounts_daemon[1140]: Full path required for exclude: net[4026532634]
Does anyone have an idea on this?
Response to the bug report I filed on Google Cloud Platform:
The accounts daemon is activated every ~90 seconds to ensure that expired SSH keys are removed from the guest. At that time, the guest will attempt to create/update/remove users based on the non-expired keys in metadata. If some user isn't getting set up properly (maybe there is a problem adding the user to one of the listed groups?) it might result in something getting called every couple minutes.
Google Cloud Platform Issue #444
I did not take more time to debug the issue, since the only problem is the volume of log messages.
The log messages stopped when I upgraded the instances to CentOS 7.4.
Yes, I am running Docker inside my VM. ^_^

Jhipster app logging to remote ELK (elastic stack)

I've been asked to configure an ELK stack in order to manage logs from several applications of a certain client. I have the stack hosted and working on a Red Hat 7 server (I followed this cookbook), and a test instance in a virtual machine with Ubuntu 16.04 (I followed this other cookbook), but I've hit a roadblock and cannot seem to get through it. Kibana is rather new to me and maybe I don't fully understand the way it works. In addition, the most important application of the client is a JHipster-managed application, another tool I am not familiar with.
Up until now, everything I've found about JHipster and Logstash tells me to install the full ELK stack using Docker (which I haven't, and would rather avoid in order to keep the configuration I've already made). Kibana deployed through that method comes with a dashboard already tuned for displaying the information that the application sends with its native configuration, activated in application.yml with logstash: enabled: true.
So... my questions would be: Can I get that preconfigured JHipster dashboard imported into my preexisting Kibana deployment? Where is the data logged by the application stored? Can I expect a humanly readable format? Is there any other way of testing that the configuration is working, since I don't have any traffic going through the test instance into the VM?
Since that JHipster app is not the only one I care about, I want other dashboards and inputs to be displayed from other applications as well, most probably using Filebeat.
Any reference to useful information is appreciated.
Yes you can. Take a look at this repository: https://github.com/jhipster/jhipster-console/tree/master/jhipster-console
The Kibana exports (in JSON format) are stored in the repository, along with a load.sh script.
The script adds the configurations via the API. As you can imagine, your existing dashboards are not affected by this, so you can keep your existing configuration.
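To illustrate the idea behind that load.sh-style import: older Kibana versions (up to 5.x) stored saved objects in the .kibana Elasticsearch index, so a dashboard export could be pushed back with a plain HTTP PUT. The export file and dashboard id below are hypothetical, and the command is printed for review rather than executed, since it needs a live Elasticsearch at localhost:9200:

```shell
# Hypothetical dashboard export, as shipped in the jhipster-console repo
EXPORT=/tmp/jhipster-dashboard.json
printf '{"title":"jhipster-demo"}' > "$EXPORT"

# Build and print the import command instead of running it
CMD="curl -XPUT http://localhost:9200/.kibana/dashboard/jhipster-demo -d @$EXPORT"
echo "$CMD"
```

Newer Kibana versions replace this with a dedicated saved-objects import API, so check which mechanism your deployment expects.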
