See user id in Linux journalctl

I use journalctl -f to see the logs written by processes, but it does not print the user who started the process that is writing to syslog.
Is there any option we can provide to journalctl in order to print the user id?
Thanks.

If you run journalctl -o verbose you will see the complete log record for each entry, which will look something like:
Fri 2019-08-02 20:02:11.307673 EDT [s=e328b42f9ccd4ef28cc946dba525b34c;i=12377a1;b=299ec3a7a9e545f3ab77225c045aee0c;m=7a7d1960ef;t=58f2b2fc43900;x=ceff52112a8146ae]
_TRANSPORT=journal
_UID=1000
_GID=1000
_CAP_EFFECTIVE=0
_AUDIT_LOGINUID=1000
_SYSTEMD_OWNER_UID=1000
_SYSTEMD_SLICE=user-1000.slice
_SYSTEMD_USER_SLICE=-.slice
_BOOT_ID=299ec3a7a9e545f3ab77225c045aee0c
_MACHINE_ID=39332780e5924d6ba0bdf775223941f6
_HOSTNAME=madhatter
PRIORITY=6
SYSLOG_FACILITY=3
CODE_FILE=../src/core/job.c
CODE_LINE=594
CODE_FUNC=job_log_begin_status_message
SYSLOG_IDENTIFIER=systemd
MESSAGE=Starting GNOME Terminal Server...
JOB_ID=1446
JOB_TYPE=start
USER_UNIT=gnome-terminal-server.service
USER_INVOCATION_ID=188bcf32f532490c8ce5ec486895f9d0
MESSAGE_ID=7d4958e842da4a758f6c1cdc7b36dcc5
_PID=3329
_COMM=systemd
_EXE=/usr/lib/systemd/systemd
_CMDLINE=/usr/lib/systemd/systemd --user
_AUDIT_SESSION=3
_SYSTEMD_CGROUP=/user.slice/user-1000.slice/user@1000.service/init.scope
_SYSTEMD_UNIT=user@1000.service
_SYSTEMD_USER_UNIT=init.scope
_SYSTEMD_INVOCATION_ID=5537b4a00b5946ce9f3b2b664c3d10e8
_SOURCE_REALTIME_TIMESTAMP=1564790531307673
Take a look at man journalctl for more information. For example, there is a -o json option if you want to process log lines programmatically in some fashion.
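For instance, a minimal sketch (assuming the jq tool is installed) that follows the journal and prints the UID next to each message:

# Follow the journal and print _UID alongside each MESSAGE (requires jq)
journalctl -f -o json | jq -r '"\(._UID // "unknown")  \(.MESSAGE)"'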

Related

Monitoring the System Log File via Bash Script

I am currently using the following to read the system log:
journalctl -u <service name> | tail -n1
However I have a need to monitor the system log live, as changes come in (in a bash script), instead of just looping through the log file.
What is the best way to do this? I did some research into where the journalctl command reads from, and it seems that the journal files are not plain text (or at least they were unreadable when I attempted to read them with cat).
Any suggestions would be much appreciated!
The journalctl tool has a -f flag which prints new log entries as soon as they are appended, much like tail -f. Use it like this:
$ journalctl -u <service name> -f
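If the script needs to react to each entry as it arrives, a minimal sketch (the echo is just a placeholder for your own handling, and -o cat strips the metadata so only the message text is read) would be:

journalctl -u <service name> -f -o cat | while read -r line; do
    # handle each new log entry here
    echo "new entry: $line"
done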

How to add timestamps to crond's native logs?

I know this has been asked countless times but I am looking for a solution that uses crond's native log function. I do not want to pipe the output of each cron and prepend the timestamp.
I am launching crond like this:
crond -L /var/log/cron.log -f
the logs are like this:
crond: crond (busybox 1.30.1) started, log level 8
crond: USER root pid 16 cmd echo "hello"
crond: USER root pid 18 cmd echo "hello"
crond: USER root pid 19 cmd echo "hello"
I'd like to add the timestamp before the line. I do not want to add some stdout command to each individual cron and prepend the date.
Maybe I could watch the file and prepend something to each new line? How do I get access to crond's output stream and modify it?
I believe the answer is that it's not possible to change the format of crond's own log file.
The implementation of crond does not make it easy to control the log format for individual jobs. Also, crond runs as root, which makes it hard for user jobs to modify the file, and changing the file while crond is running will likely cause problems.
Consider instead the following approach:
Write a process that will tail -f the log file, and create a new log file, with each line prefixed by the timestamp.
Run the process at boot time (one way to do that is sketched below).
tail -f /var/log/cron.log | while read x ; do echo "$(date) $x" ; done >> /var/log/cron-ts.log
Adjust the date format to whatever you need.
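As a concrete (and purely illustrative) way to run that pipeline at boot on a systemd-based machine, you could wrap it in a small script and a unit file; the names and paths below are placeholders, and an init script or an @reboot crontab entry would work just as well.

#!/bin/sh
# /usr/local/bin/cron-ts.sh (illustrative path; make it executable)
# Follow crond's log and write a timestamped copy of each line
tail -F /var/log/cron.log | while read -r x; do
    echo "$(date) $x"
done >> /var/log/cron-ts.log

# /etc/systemd/system/cron-timestamps.service (illustrative name)
[Unit]
Description=Prefix crond log lines with timestamps

[Service]
ExecStart=/usr/local/bin/cron-ts.sh
Restart=always

[Install]
WantedBy=multi-user.target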

How to redirect output of systemd service to a file

I am trying to redirect output of a systemd service to a file but it doesn't seem to work:
[Unit]
Description=customprocess
After=network.target
[Service]
Type=forking
ExecStart=/usr/local/bin/binary1 agent -config-dir /etc/sample.d/server
StandardOutput=/var/log1.log
StandardError=/var/log2.log
Restart=always
[Install]
WantedBy=multi-user.target
Please correct my approach.
I think there's a more elegant way to solve the problem: send the stdout/stderr to syslog with an identifier and instruct your syslog manager to split its output by program name.
Use the following properties in your systemd service unit file:
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=<your program identifier>   (note: no quotes around the identifier)
Then, assuming your distribution is using rsyslog to manage syslogs, create a file in /etc/rsyslog.d/<new_file>.conf with the following content:
if $programname == '<your program identifier>' then /path/to/log/file.log
& stop
Now make the log file writable by the syslog user, matching the ownership of the existing /var/log/syslog:
# ls -alth /var/log/syslog
-rw-r----- 1 syslog adm 439K Mar 5 19:35 /var/log/syslog
# chown syslog:adm /path/to/log/file.log
Restart rsyslog (sudo systemctl restart rsyslog) and enjoy! Your program's stdout/stderr will still be available through journalctl (sudo journalctl -u <your service name>), but they will also be available in your file of choice.
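Applied to the unit from the question, the [Service] section would then look roughly like this (binary1 as the identifier is just an example):

[Service]
Type=forking
ExecStart=/usr/local/bin/binary1 agent -config-dir /etc/sample.d/server
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=binary1
Restart=always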
Source via archive.org
If you have a newer distro with a newer systemd (systemd version 236 or newer), you can set the values of StandardOutput or StandardError to file:YOUR_ABSPATH_FILENAME.
Long story: this is a relatively new option in systemd (the GitHub feature request dates from around 2016 and the enhancement was merged in 2017). The file:path option is documented in the most recent systemd.exec man page.
Because it is relatively recent, it is not available on older distros such as CentOS 7 (or any CentOS before that).
I would suggest setting the stdout and stderr files in the systemd service file itself.
Referring to: https://www.freedesktop.org/software/systemd/man/systemd.exec.html#StandardOutput=
As you have it configured, it should not be:
StandardOutput=/home/user/log1.log
StandardError=/home/user/log2.log
It should be:
StandardOutput=file:/home/user/log1.log
StandardError=file:/home/user/log2.log
Note that file: works fine if you don't restart the service again and again, but it effectively starts the file over and does not append to the existing contents.
To append instead, use:
StandardOutput=append:/home/user/log1.log
StandardError=append:/home/user/log2.log
NOTE: Make sure the directory already exists; systemd does not appear to create it for you.
With your current configuration you probably get this error:
Failed to parse output specifier, ignoring: /var/log1.log
From the systemd.exec(5) man page:
StandardOutput=
Controls where file descriptor 1 (STDOUT) of the executed processes is connected to. Takes one of inherit, null, tty, journal, syslog, kmsg, journal+console, syslog+console, kmsg+console or socket.
The systemd.exec(5) man page explains other options related to logging. See also the systemd.service(5) and systemd.unit(5) man pages.
Or you can try something like this (all on one line; note that the file redirection has to come before 2>&1 so that stderr follows stdout into the file):
ExecStart=/bin/sh -c '/usr/local/bin/binary1 agent -config-dir /etc/sample.d/server > /var/log.log 2>&1'
If for some reason you can't use rsyslog, this will do:
ExecStart=/bin/bash -ce "exec /usr/local/bin/binary1 agent -config-dir /etc/sample.d/server >> /var/log/agent.log 2>&1"
Short answer:
StandardOutput=file:/var/log1.log
StandardError=file:/var/log2.log
If you don't want the files to be cleared every time the service is run, use append instead:
StandardOutput=append:/var/log1.log
StandardError=append:/var/log2.log
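Putting that into the unit from the question gives something like the following (append: needs a reasonably recent systemd, roughly version 240 or later; file: is available from 236):

[Unit]
Description=customprocess
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/bin/binary1 agent -config-dir /etc/sample.d/server
StandardOutput=append:/var/log1.log
StandardError=append:/var/log2.log
Restart=always

[Install]
WantedBy=multi-user.target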
We are using CentOS 7 with a Spring Boot application run under systemd. I was running Java as below, and setting StandardOutput to a file was not working for me.
ExecStart=/bin/java -jar xxx.jar -Xmx512M -Xms32M
The workaround below works without setting StandardOutput, by running Java through sh:
ExecStart=/bin/sh -c 'exec /bin/java -jar xxx.jar -Xmx512M -Xms32M >> /data/logs/xxx.log 2>&1'
Assume the application already writes its logs to stdout/stderr, and the systemd unit's log ends up in /var/log/syslog:
journalctl -u unitxxx.service
Jun 30 13:51:46 host unitxxx[1437]: time="2018-06-30T11:51:46Z" level=info msg="127.0.0.1
Jun 30 15:02:15 host unitxxx[1437]: time="2018-06-30T13:02:15Z" level=info msg="127.0.0.1
Jun 30 15:33:02 host unitxxx[1437]: time="2018-06-30T13:33:02Z" level=info msg="127.0.0.1
Jun 30 15:56:31 host unitxxx[1437]: time="2018-06-30T13:56:31Z" level=info msg="127.0.0.1
Configure rsyslog (the system logging service):
# Create directory for log file
mkdir /var/log/unitxxx
# Then add config file /etc/rsyslog.d/unitxxx.conf
if $programname == 'unitxxx' then /var/log/unitxxx/unitxxx.log
& stop
Restart rsyslog
systemctl restart rsyslog.service
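A quick way to check that the rsyslog rule matches (assuming the standard logger utility is available) is to send a test message with the same tag and look at the new file:

# Emit a test line tagged with the program name, then check the routed file
logger -t unitxxx "rsyslog routing test"
tail -n 1 /var/log/unitxxx/unitxxx.log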
In my case the 2>&1 (the stderr-to-stdout redirection) had to be placed correctly; then log redirection worked as I expected:
[Unit]
Description=events-server
[Service]
User=manjunath
Type=simple
ExecStart=/bin/bash -c '/opt/events-server/bin/start.sh my-conf 2>&1 >> /var/log/events-server/events.log'
[Install]
WantedBy=multi-user.target
Make your service file call a shell script instead of running the app directly. This gives you extra control; for example, you can produce and rotate output files like the ones in /var/log/.
Create a shell script such as /opt/myapp/myapp.sh:
#!/bin/sh
# Rotate the previous log before starting the app
/usr/sbin/logrotate --force /opt/myapp/myapp.conf --state /opt/myapp/state.tmp
logger "[myapp] Run" # send a marker to syslog
# Start the app in the background, capturing stdout and stderr in the log file
myapp > /opt/myapp/myapp.log 2>&1 &
And your service file myapp.service contains:
...
[Service]
Type=forking
ExecStart=/bin/sh -c /opt/myapp/myapp.sh
...
A sample of log config file /opt/myapp/myapp.conf
/opt/myapp/myapp.log {
daily
rotate 20
missingok
compress
}
Then you will get myapp.log for the current run, plus compressed myapp.log.1.gz (and so on) from previous runs, rotated each time the service starts.
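To check the logrotate configuration without actually rotating anything, a dry run is useful:

# -d (debug) prints what would be done without modifying any files
/usr/sbin/logrotate -d --state /opt/myapp/state.tmp /opt/myapp/myapp.conf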

Find out which user accessed a particular file and at what time in UNIX

Can someone suggest a command I can use to see which user accessed a particular file, and at what time, in UNIX? I know the history command lists previously executed commands, but it doesn't include who ran them or at what time.
Use Linux auditd for a particular file
http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html
Example
Let's say I have a file (let it be $HOME/an_important_file.txt) and I want to watch all accesses to it. First set up an audit rule for it:
$ sudo auditctl -w $PWD/an_important_file.txt -p warx -k watch_an_important_file
And check the audit log:
$ sudo ausearch -k watch_an_important_file
----
time->Thu May 12 10:54:16 2016
type=CONFIG_CHANGE msg=audit(1463039656.913:278): auid=500 ses=1 subj=unconfined_u:unconfined_r:auditctl_t:s0-s0:c0.c1023 op="add rule" key="watch_an_important_file" list=4 res=1
Then I modified the file with touch ($ touch $HOME/an_important_file.txt) and checked the audit log again:
$ sudo ausearch -k watch_an_important_file
----
time->Thu May 12 10:54:16 2016
type=CONFIG_CHANGE msg=audit(1463039656.913:278): auid=500 ses=1 subj=unconfined_u:unconfined_r:auditctl_t:s0-s0:c0.c1023 op="add rule" key="watch_an_important_file" list=4 res=1
----
time->Thu May 12 10:56:42 2016
type=PATH msg=audit(1463039802.788:291): item=1 name=(null) inode=535849 dev=fd:02 mode=0100664 ouid=500 ogid=500 rdev=00:00 obj=unconfined_u:object_r:user_home_t:s0 nametype=NORMAL
type=PATH msg=audit(1463039802.788:291): item=0 name="/home/Sergey.Kurenkov/" inode=524289 dev=fd:02 mode=040700 ouid=500 ogid=500 rdev=00:00 obj=unconfined_u:object_r:user_home_dir_t:s0 nametype=PARENT
type=CWD msg=audit(1463039802.788:291): cwd="/usr"
type=SYSCALL msg=audit(1463039802.788:291): arch=c000003e syscall=2 success=yes exit=3 a0=7fff6d986060 a1=941 a2=1b6 a3=3149b8f14c items=2 ppid=4852 pid=10022 auid=500 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=pts1 ses=1 comm="touch" exe="/bin/touch" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="watch_an_important_file"
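The raw records show numeric IDs (auid=500, uid=500, and so on); ausearch can translate them into user names and readable timestamps with its -i (interpret) option:

$ sudo ausearch -k watch_an_important_file -i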
You can use stat to find out when a file was last accessed. This only works if your file system records access times (atime). But it does not tell you who accessed the file.
You can use lsof to list processes which currently use a file. But you might not see processes of other users if your user has insufficient privileges (you can see all processes if you are root).
Normally the output of history is generated from a history file belonging to the executing user, so you can assume that the commands printed by history were all executed by that same user. In some shells you can set an option to store the time of execution together with the command; then history can show that time as well. This depends on the shell you are using.
You can read the man pages of stat, lsof, bash or zsh (or maybe ksh?) to learn more about this.
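For example, with GNU stat you can print just the last access time (note that this shows the file's owner, not the user who last accessed it):

# %x is the time of last access, %U the owning user
stat -c 'owner: %U  last access: %x' "$HOME/an_important_file.txt"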
You can add the following lines to ~/.bashrc so that the history command logs entries in the [<user> 2016-05-11 14:04:33] <command> format. The settings apply to all open interactive terminals.
export HISTFILESIZE=100000000
export HISTSIZE=100000000
# First two are optional, they need to be changed only if the default 500
# lines history logging needs to be changed
export HISTTIMEFORMAT="[$USER %F %T] "
HISTCONTROL=ignoredups:erasedups
shopt -s histappend
PROMPT_COMMAND="history -n; history -w; history -c; history -r; $PROMPT_COMMAND"
Original answer, modified to also store $USER.

SSH "Login monitor" for Linux

I'm trying to write a script that informs the user when someone has logged in on the machine via ssh.
My current idea is to parse the output of "w" using grep in intervals.
But that's neither elegant nor performant. Has anyone got a better idea how to implement such a program?
Any help would really be appreciated!
Paul Tomblin has the right suggestion.
Set up logging in your sshd_config to point to a syslog facility that you can log separately. See man 3 syslog for the available facilities and choose one, e.g.:
# Logging
SyslogFacility local5
LogLevel INFO
Then set up your syslog.conf like this:
local5.info |/var/run/mysshwatcher.pipe
Add the script you're going to write to /etc/inittab so it keeps running:
sw0:2345:respawn:/usr/local/bin/mysshwatcher.sh
then write your script:
#!/bin/sh
P=/var/run/mysshwatcher.pipe
test -p $P || mkfifo $P
while read x <$P; do
# ... whatever, e.g.:
echo "ssh info: $x" | wall
done;
Finally, restart your syslogd and have inittab reloaded (init q), and it should work. If other variants of these services are used, you need to configure things accordingly (e.g. newsyslogd => /etc/newsyslog.conf; Ubuntu: /etc/event.d instead of inittab).
This is very rudimentary and lacking, but it should be enough to get you started ...
More info: see man sshd_config for further logging options and verbosity levels.
On Ubuntu (and I'd guess other Debian-based distros, if not most Linux systems), the file /var/log/auth.log records successful (and unsuccessful) login attempts:
sshd[XXX]: pam_unix(sshd:session): session opened for user XXX
You could set up a very simple monitor using this command (note that you have to be root to see the auth log):
sudo tail -F /var/log/auth.log | grep sshd
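To turn that into a simple notifier, you could feed the matches into a loop; a minimal sketch (wall is used here just as an example notification command):

sudo tail -F /var/log/auth.log | grep --line-buffered 'sshd.*session opened' | while read -r line; do
    echo "SSH login: $line" | wall
done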
If you do not care how they logged in (telnet/ssh), the last Unix command-line utility shows you the most recent logins on the machine. Remote users are shown with their IP address:
[root@ex02 www]# last
foo pts/1 81.31.x.y Sun Jan 18 07:25 still logged in
foo pts/0 81.31.x.y Sun Jan 18 01:51 still logged in
foo pts/0 81.31.x.y Sat Jan 17 03:51 - 07:52 (04:00)
bar pts/5 199.146.x.y Fri Jan 16 08:57 - 13:29 (04:32)
Set up a named pipe, and set up a log file parser to listen to it, and send the ssh messages to it. The log file parser can do what you want, or signal to a daemon to do it.
Redirecting the log file is done in a config file in /etc/ whose name escapes me right now. /etc/syslog.conf, I think.
I have made a program (which I call Authentication Monitor) that solves the task described in the question.
You are more than welcome to download it to investigate how I solved the problem (using log files).
You can find Authentication Monitor freely available here: http://bwyan.dk/?p=1744
We had the same problem, so we wrote our own script.
It can be downloaded from GitHub.
Hope it helps :)
cheers!
Ivan
