Sentry on supervisor - Error no log file - ubuntu-14.04

I'm trying to set up a Sentry server (getsentry.com). I can run Sentry just fine manually as the sentry user, but when I try to run it under supervisor and then run 'supervisorctl tail sentry-web', I get 'sentry-web: ERROR (no log file)'.
The supervisor program entry for sentry specifies syslog for both the regular and the error log. I have also tried specifying absolute paths to log files, both in the sentry user's home directory and under /var/log.
OS: Ubuntu 14.04
Supervisor: 3.0b2
Sentry: 7.5.4

You won't be able to use the log commands (or fg, IIRC) when using syslog for output (which we recommend for simplicity).
An absolute path should work fine, but you'll need to confirm that the user supervisor runs Sentry as has write access to the directory.
If you use syslog, you should see the output in /var/log/syslog.
A good way to test things is to run the command as that user in the foreground (outside of supervisor). Also, by default our log verbosity isn't very high (we actually don't do much informational/debug logging at this time), so you're fairly limited in the output you'll get. It is possible to change this, but it goes through Django's LOGGING configuration, and we don't yet document or expose it in a user-friendly way.
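For reference, a supervisor program entry with absolute log paths might look roughly like the sketch below. The program name, command path, user, and log directory are assumptions here and need to match your install, and the log directory must exist and be writable by that user.

[program:sentry-web]
; command path, user, and log locations below are examples - adjust to your install
command=/www/sentry/bin/sentry start
user=sentry
autostart=true
autorestart=true
stdout_logfile=/var/log/sentry/sentry-web.log
stderr_logfile=/var/log/sentry/sentry-web.err
; alternatively, send output to syslog - supervisorctl tail will then report "no log file":
; stdout_logfile=syslog
; stderr_logfile=syslog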

Related

FreeRADIUS problem reading google_authenticator secrets on CentOS 7

I have a FreeRADIUS server set up with Google Authenticator to provide a basic working multi-factor setup.
Everything works when I run radiusd in debug mode as root. If I start it as a service, logons fail and this message is recorded while it processes requests:
radiusd(pam_google_authenticator)[1115]: Failed to read "/home/user#domain.com/.google_authenticator" for "user#domain.com"
I think this must be a permissions issue since it works fine when run as root.
I don't really want to edit the permissions on each secret file for every user.
I have tried specifying root in /etc/raddb/radiusd.conf:
user = root
group = root
but the service still fails unless radiusd is run from the command line as root. Does anyone have a nice, elegant solution to this conundrum?
I think you should check out your systemd service file for radiusd. It might look something like:
https://github.com/ipfire/ipfire-3.x/blob/master/freeradius/systemd/freeradius.service
You can add User= and Group= in the [Service] section of the .service file if needed. See
https://unix.stackexchange.com/questions/347358/how-to-change-service-user-in-centos-7
and
https://serverfault.com/questions/806617/configuring-systemd-service-to-run-with-root-access
It would be a good idea to put the contents of the .service file for radiusd in your post.
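In the meantime, here is a sketch of a drop-in override; the unit name radiusd.service is the usual one on CentOS 7, but verify it with systemctl list-units on your box.

# /etc/systemd/system/radiusd.service.d/override.conf
# (systemctl edit radiusd creates and opens this file for you)
[Service]
User=root
Group=root

After saving the file, run systemctl daemon-reload and then systemctl restart radiusd so the change takes effect.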

Nagios is not reading values from a file on another server

I have written a bash script. If I run this script manually on the same server, its output is:
CRITICAL:Something really bad is happening on server.CPU load of Process id: 11109
for user: root with command: java is 76.5
Then I configured an alert for it in Nagios, and Nagios reads its output as:
CRITICAL:Something really bad is happening on server.CPU load of Process id:
for user: with command: is
That is, the values read from the file are missing.
That's most likely happening because Nagios generally uses a user such as "nagios" or "nrpe" to execute the plugin scripts, and that user cannot see all processes the way root can, or does not have permission to read the file you are asking it to read. You should give the nrpe user permission to run the check via "sudo" to solve your issue. Please note that in order to run sudo as a user that does not log in (such as the Nagios user), you also need to comment out the requiretty parameter in the /etc/sudoers file.
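A rough sketch of that sudo setup, assuming NRPE runs as the nrpe user and the plugin lives at /usr/local/nagios/libexec/check_cpu_load.sh (both the user and the path are placeholders for your environment):

# /etc/sudoers - edit with visudo; the plugin path below is a placeholder
Defaults:nrpe !requiretty
nrpe ALL=(root) NOPASSWD: /usr/local/nagios/libexec/check_cpu_load.sh

# /etc/nagios/nrpe.cfg - call the plugin through sudo
command[check_cpu_load]=sudo /usr/local/nagios/libexec/check_cpu_load.sh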

Where is the log file using Production profile with NServiceBus GenericHost and default log4net settings when installed as a service?

I have a very simple NServiceBus.Host.exe application that is using the default logging and the Production profile. According to the documentation, this should result in an appending file log that should appear in the same folder as the EXE. However, when I run the application as a service, the log file doesn't appear in the same folder as the EXE, and thus far I've been unable to locate it at all. The service is running as Local System. Do I need to run it as a user account and look for the file in the AppData folder somewhere? Is it under c:\windows somewhere? Where is it and is there a way for me to have it actually log to a file in the same folder as the EXE as advertised?
Update:
Using ProcMon and ProcExp from SysInternals, I can see that there is no attempt to create any log file in the folder where my EXE exists, nor are there any file permission errors while trying to create a log file anywhere, at least not from the PID of the service (if for some reason log4net spins up another process to do this work then I might have missed it).
It turns out that the service wasn't actually running in the Production profile. I had for some reason gotten it into my head that services would run in the production profile by default, while running it in interactive mode would use Lite by default. Not so - the service will use the Lite profile unless you specify otherwise. I changed my command to install the service from:
NServiceBus.Host.exe /install /displayName:MyService
to
NServiceBus.Host.exe /install /displayName:MyService NServiceBus.Production
and this fixed the issue.

Linux - Subversion - post-commit hook not executing

I am running Arch Linux. I have installed Subversion and set it up for use with HTTPS. Everything seems to be working fine, with the exception of my hooks.
I have one hook: path/to/repo/hooks/post-commit.
It is executable.
I have included a logging statement with: echo "Complete." >> /path/hook.log
When executed as the http user from the command line the script works fine, including the log statement.
When I commit changes I do not see the addition to the log or any of the actions from the rest of the script.
What might I have mis-configured?
Are there any logs to check for this?
Maybe you need to set the proper permissions on /path/hook.log, so that the user that executes the SVN hook can write to that file.
But maybe you could give us a little more information about that hook.
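For example, assuming Apache (and therefore the hook) runs as the http user, as it does by default on Arch, first make the log writable by that user (the log path is the one from the question):

touch /path/hook.log
chown http:http /path/hook.log

A minimal post-commit for testing might then look like this sketch. Subversion passes the repository path and revision as arguments, and hooks run with an empty environment, so use absolute paths for anything the script calls:

#!/bin/sh
# path/to/repo/hooks/post-commit (illustrative sketch)
REPOS="$1"
REV="$2"
echo "Complete. repo=$REPOS rev=$REV" >> /path/hook.log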

How should I log from a non-root Debian Linux daemon?

I'm writing a new daemon, which will be hosted on Debian Linux.
I've found that /var/log has root only write permissions, so my daemon cannot write log files there.
However, if it writes there, it appears it will gain automatic log rotation, and also work as a user might expect.
What is the recommended way for a daemon to write log entries that appear in /var/log, without having to be run as root?
The daemon is a webserver, so the log traffic will be similar to Apache.
You should create a subdirectory like /var/log/mydaemon owned by the daemon's user.
As root, create a logfile there and change the file's owner to the webserver user:
# touch /var/log/myserver.log
# chown wwwuser /var/log/myserver.log
Then the server can write to the files if run as user wwwuser. It will not gain automatic log rotation, though. You have to add the logfile to /etc/logrotate.conf or /etc/logrotate.d/... and make your server reopen the logfile when logrotate signals it should.
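A sketch of the logrotate side, assuming the logs live under /var/log/mydaemon, the daemon runs as wwwuser, and it reopens its log on SIGHUP with its PID in /var/run/mydaemon.pid (all of those are assumptions to adapt):

# /etc/logrotate.d/mydaemon - paths, user, schedule, and signal are examples
/var/log/mydaemon/*.log {
    weekly
    rotate 4
    missingok
    notifempty
    compress
    create 0640 wwwuser adm
    postrotate
        # ask the daemon to reopen its logfile after rotation
        kill -HUP `cat /var/run/mydaemon.pid` 2>/dev/null || true
    endscript
}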
You might also use syslog for logging, if that fits your scenario better.
Two options:
Start as root, open the file, then drop permissions with setuid. (I don't remember the exact system calls for dropping permissions.) You'll have to do this anyway if you want to bind to TCP port 80 or any port below 1024.
Create a subdirectory like /var/log/mydaemon owned by the daemon's user, as WiseTechi said.
Files under /var/log aren't automatically rotated; instead, rotation is controlled by /etc/logrotate.conf and files under /etc/logrotate.d.
use the "logger" command
http://linux.die.net/man/1/logger
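For example (the tag, priority, and message here are illustrative):

# tag and priority are placeholders; pick ones that match your daemon
logger -t mydaemon -p daemon.info "listening on port 8080"

On Debian the entry then ends up in /var/log/syslog (and in /var/log/daemon.log for the daemon facility), which are already rotated by the stock logrotate configuration.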
