Forensic analysis - process log - Linux

I am performing forensic analysis on host-based evidence, examining the partitions of a server's hard drive.
I am interested in finding out which processes all the "users" ran before the system died/rebooted.
As this isn't live analysis, I can't use ps or top to see the running processes.
So I was wondering if there is a log, like /var/log/messages, that shows me what processes users ran.
I have gone through a lot of logs in /var/log/* - they give me information about logins, package updates, and authorization - but nothing about processes.

If "command accounting" (process accounting) was not enabled, there is no such log.
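For completeness: "command accounting" here means BSD process accounting (the acct/psacct package). It cannot help after the fact, but enabling it now gives you exactly this kind of record for the next incident. A minimal sketch, assuming the GNU acct tools are installed (the accounting file path varies by distribution, and the user name is illustrative):

# Start recording every command that terminates into the accounting file.
touch /var/log/account/pacct
accton /var/log/account/pacct
# Later, list the commands run by a particular user:
lastcomm --user alice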

The chances of finding something are not great, but there are a few things to consider:
It depends on how graceful the death/reboot was (if processes were killed gracefully, .bash_history and similar files may have been updated with recent session information).
The utmp and wtmp files may give you the list of users active at the time of the reboot.
The OS may have saved a crash dump (this depends on the Linux distribution). If so, you may be able to examine the OS state at the moment of the crash. See Red Hat's crash utility for details (http://people.redhat.com/anderson/crash_whitepaper/).
/tmp and /var/tmp may hold some clues about what was running.
Look for any files with mtime or ctime timestamps (and maybe atime as well) near the time of the crash; see the find sketch after this list.
You may be able to get something useful from the swap partition (especially if the reboot was related to heavy RAM usage).
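For the timestamp point above, GNU find can pull out files whose metadata changed in a window around the crash. A sketch only; the mount point /mnt/evidence and the timestamps are assumptions:

# Files whose ctime falls in the hour before the (assumed) crash time,
# scanned on a read-only mounted evidence image.
find /mnt/evidence -xdev \
     -newerct "2024-01-15 02:00" ! -newerct "2024-01-15 03:00" \
     -printf '%C@ %T@ %p\n' 2>/dev/null | sort -n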

"So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran"
Given the OS implied by the file system path /var/log, I am assuming you are using Ubuntu or some other Linux-based server. If you are not doing live forensics while the box is running, or memory forensics (where a memory capture was grabbed), and you rebooted the system, there is no file within /var/log that will attribute processes to users. However, if a user was using the bash shell, you could check their .bash_history file, which shows the commands that user ran (by default bash keeps the last 500 commands).
Alternatively, if a memory dump was made (of /dev/mem or /dev/kmem), then you could use Volatility to pull out the processes that were running on the box. But even then, I do not think you could attribute the processes to the users that ran them; you would need additional output from Volatility for that link to be made.
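If you go the .bash_history route on a mounted disk image, a small loop can gather the per-user history files for review. A sketch; the mount point /mnt/evidence is an assumption:

# Collect shell history files from the mounted evidence image.
for histfile in /mnt/evidence/root/.bash_history /mnt/evidence/home/*/.bash_history; do
    [ -f "$histfile" ] || continue
    echo "=== $histfile ==="
    cat "$histfile"
done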

Related

Log processes and review their permissions

I need to check the permissions of all processes on Linux to see whether some of them are running with unnecessarily high privileges (with sudo or, even worse, as root). Is there a way to log processes so that I can check them afterwards?
I have checked the journal for commands, but I'm not sure if that's all or whether I should look for something else as well.
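Not from the original thread, but one common way to get this kind of record going forward is the Linux audit framework, which can log every execve() together with the calling user. A sketch, assuming auditd is installed:

# Log every program execution (64-bit syscalls) under the key "proc_log".
auditctl -a always,exit -F arch=b64 -S execve -k proc_log
# Later, review who ran what and under which uid/euid:
ausearch -k proc_log -i | less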

I observed a Java process running at root level through the top command on my application server; will it lead to performance problems?

We were running a load test and simultaneously executed the top command, and observed that the Java process (running at root level) was consuming 204% CPU, even though we ran just 10% of the expected load on the server.
Also, one of my colleagues said that a Java process should not be running at root level, as this leads to performance issues.
I tried searching the internet but could not find anything which says that a Java process should not run at root level.
Note for experts: please excuse my lack of knowledge, and please do not downvote or block the question.
Screenshot of the top command output: (image not included)
That's incorrect -- running a process as root will not affect performance, but will likely affect security.
The reason why everyone says not to run your processes as root unless ABSOLUTELY NECESSARY is because the root user has privileges over the entire disk, and many other things: external devices, hardware, processes, etc.
Running code that interacts with the world as root means that if anyone finds a vulnerability in your code / project / process / whatever, the amount of damage that can be done is likely WAY MORE than what a non-root user could do.
Try running the command below to list all processes in a tree structure.
ps -e -o pid,args --forest
From the output, you will be able to figure out which parent processes the Java (or other) processes running at root level are children of. For example, sometimes while testing scripts we trigger them with sudo ourselves, which in turn starts the Java instance as root.
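A small variation (not in the original answer) also prints the owning user and parent PID of each process, which makes the root-owned ones easy to spot:

ps -eo user,pid,ppid,args --forest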

syslog: does it remove old logs when there is not enough space on the storage?

I am using syslog on an embedded Linux device (Debian on ARM) that has relatively small storage (~100 MB). If we assume the system will be up for 30 years and it logs all possible activities, could syslog fill up the storage? If so, is syslog intelligent enough to remove old logs when space on the storage medium runs low?
It completely depends on how much stuff gets logged, but if you only have ~100 MB, I would imagine that it's certainly likely that your storage will fill up before 30 years!
You didn't say which syslog server you're using. If you're on an embedded device you might be using the BusyBox syslogd, or you may be using the regular syslogd, or you may be using rsyslog. But in general, no syslog server rotates log files all by itself. They all depend on external scripts run from cron to do it. So you should make sure you have such scripts installed.
In non-embedded systems the log rotation functionality is often provided by a software package called logrotate, which is quite elaborate and has configuration files to say how and when which log files should be rotated. In embedded systems there is no standard at all. The most common configuration (especially when using BusyBox) is that logs are not written to disk at all, only to a memory ring buffer. The next most common configuration is idiosyncratic ad-hoc scripts built and installed by the embedded system integrator. So you just have to scan the crontabs and see whether anything configured to be invoked there looks like a log rotator.
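As an illustration only (the log name, size, and rotation count are assumptions, and this presumes logrotate and cron exist on the device), a policy like the following keeps the log well under a 100 MB budget:

# Rotate /var/log/messages once it passes 1 MB and keep 4 compressed copies.
cat > /etc/logrotate.d/messages <<'EOF'
/var/log/messages {
    size 1M
    rotate 4
    compress
    missingok
    notifempty
}
EOF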

Linux service crashes

I have a Linux service (C++, with lots of loadable modules, basically .so files picked up at runtime) which crashes from time to time. I would like to get to the bottom of this crash and investigate it, but at the moment I have no clue how to proceed. So I'd like to ask you the following:
If a Linux service crashes, where is the "core" file created? I have set ulimit -c 102400, which should be enough, but I cannot find the core files anywhere :(.
Are there any Linux logs that track services? The service's own log obviously does not tell me that it is about to crash...
It might be that one of the modules is crashing, but I cannot tell which one. I cannot even tell which modules are loaded. Do you know how to show, on Linux, which modules a service is using?
Any other hints you might have for debugging a Linux service?
Thanks
f-
Under Linux, processes which switch user ID have their core dumps disabled for security reasons. This is because they often do things like reading privileged files (think /etc/shadow), and a core file could contain sensitive information.
To enable core dumping on processes which have switched user ID, you can use prctl with PR_SET_DUMPABLE.
Core files are normally dumped in the current working directory - if that is not writable by the current user, then it will fail. Ensure that the process's current working directory is writable.
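A shell-side sketch of those two checks (the PID is illustrative); note that fs.suid_dumpable=2 is the "suidsafe" mode and should be used with care, since such cores may contain sensitive data:

readlink /proc/"$PID"/cwd     # where the kernel would try to write the core
sysctl fs.suid_dumpable       # 0 = processes that changed uid never dump
sysctl -w fs.suid_dumpable=2  # allow such dumps, readable by root only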
0) Get a staging environment which mimics production as closely as possible, and reproduce the problem there.
1) You can attach to a running process using gdb -p <pid> (you need a debug build, of course).
2) Make sure the ulimit is what you think it is (output ulimit to a file from the shell script that runs your service, right before starting it). Usually you need to set the limit in /etc/profile; use ulimit -c unlimited for an unlimited core size.
3) Find the core file using find / -name \*core\* -print or similar.
4) gdb will give you the list of loaded shared objects (.so) when you attach to the process; see the sketch after this list.
5) Add more logging to your service.
Good luck!
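A sketch of item 4 (the binary name is illustrative, and you need a build with debug symbols for useful backtraces):

gdb -p "$(pidof myserver)"
# At the (gdb) prompt:
#   info sharedlibrary   # lists every .so the process has loaded
#   continue             # let it run until it crashes
#   backtrace            # after the crash, see where it died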
Your first order of business should be getting a core file. See if this answer applies.
Second, you should run your server under Valgrind, and fix any errors it finds.
Reproducing the crash while running under GDB (as MK suggested) is possible, but somewhat unlikely: bugs tend to hide when you are looking for them, and the debugger may affect timing (especially if your server is multi-threaded).
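A typical Valgrind invocation, as a sketch (the options are a common starting point, and Valgrind slows the server down considerably, so do this in the staging environment):

valgrind --leak-check=full --track-origins=yes ./server 2> valgrind.log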

Core dump file is not generated

Every time my application crashes, a core dump file is not generated. I remember that a few days ago, on another server, it was generated. I'm running the app using screen in bash, like this:
#!/bin/bash
ulimit -c unlimited
while true; do ./server; done
As you can see, I'm using ulimit -c unlimited, which is important if I want to generate a core dump, but it still doesn't generate one when I get a segmentation fault.
How can I make it work?
This link contains a good checklist of reasons why core dumps are not generated (a quick triage sketch follows the list):
The core would have been larger than the current limit.
You don't have the necessary permissions to dump core (directory and file). Notice that core dumps are placed in the dumping process' current directory which could be different from the parent process.
Verify that the file system is writable and has sufficient free space.
If a subdirectory named core exists in the working directory, no core will be dumped.
If a file named core already exists but has multiple hard links, the kernel will not dump core.
Verify the permissions on the executable: if the executable has the suid or sgid bit enabled, core dumps will be disabled by default. The same will be the case if you have execute permission but no read permission on the file.
Verify that the process has not changed working directory, core size limit, or dumpable flag.
Some kernel versions cannot dump processes with shared address space (AKA threads). Newer kernel versions can dump such processes but will append the pid to the file name.
The executable could be in a non-standard format not supporting core dumps. Each executable format must implement a core dump routine.
The segmentation fault could actually be a kernel Oops, check the system logs for any Oops messages.
The application called exit() instead of using the core dump handler.
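The quick triage sketch mentioned above, covering several of the items in the checklist (the PID is illustrative):

grep -i core /proc/"$PID"/limits               # the limit the running process really has
dmesg | grep -iE 'segfault|oops' | tail -n 20  # plain segfault, or a kernel Oops?
ls -ld "$(readlink /proc/"$PID"/cwd)"          # is its working directory writable?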
Make sure your current directory (at the time of crash -- server may change directories) is writable. If the server calls setuid, the directory has to be writable by that user.
Also check /proc/sys/kernel/core_pattern. That may redirect core dumps to another directory, and that directory must be writable. More info here.
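One way to sidestep the unwritable-directory problem altogether (the paths are illustrative) is to point core_pattern at a directory you know is writable:

mkdir -p /var/tmp/cores && chmod 1777 /var/tmp/cores
sysctl -w kernel.core_pattern='/var/tmp/cores/core.%e.%p.%t'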
For systemd systems [1], install the package systemd-coredump. Core dumps can be found via:
ls /var/lib/systemd/coredump
Furthermore, these core dumps are compressed in the lz4 format. To decompress them, you can use the package liblz4-tool, like this: lz4 -d FILE. To be able to debug the decompressed core dump using gdb, I also had to rename the very long filename to something shorter...
[1] Debian 9 Stretch
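With systemd-coredump in place, coredumpctl is usually a more convenient front end than digging through /var/lib/systemd/coredump by hand (the process name "server" is illustrative):

coredumpctl list          # all recorded crashes
coredumpctl info server   # details for the most recent dump of "server"
coredumpctl gdb server    # decompress and open it straight in gdb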
Check:
$ sysctl kernel.core_pattern
to see how your dumps are created (%e will be the process name, and %t will be the system time).
For Ubuntu, dumps are created by apport in /var/crash, but in a different format (look inside the file).
You can test it by:
sleep 10 &
killall -SIGSEGV sleep
If core dumping is successful, you will see “(core dumped)” after the segmentation fault indication.
Read more:
How to generate core dump file in Ubuntu
https://wiki.ubuntu.com/Apport
Remember that if you are starting the server from a service, it will start a different bash session, so the ulimit set in your interactive shell won't be effective there. Try to put this in your script itself:
ulimit -c unlimited
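If the server is started by systemd rather than from a shell script, the equivalent fix is a per-unit drop-in (the unit name myserver.service is an assumption):

mkdir -p /etc/systemd/system/myserver.service.d
cat > /etc/systemd/system/myserver.service.d/core.conf <<'EOF'
[Service]
LimitCORE=infinity
EOF
systemctl daemon-reload && systemctl restart myserver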
If one is on a Linux distro (e.g. CentOS, Debian) then perhaps the most accessible way to find out about core files and related conditions is in the man page. Just run the following command from a terminal:
man 5 core
Also, check to make sure you have enough disk space on /var/core or wherever your core dumps get written. If the partition is almost full or at 100% disk usage, then that would be the problem. My core dumps average a few gigabytes, so you should be sure to have at least 5-10 GB available on the partition.
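A quick check (the path is illustrative; use whatever directory your core_pattern points at):

df -h /var/core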
Note: If you have written a crash handler yourself, then the core might not get generated. So search the code for something along the lines of:
signal(SIGSEGV, <handler> );
so the SIGSEGV will be handled by handler and you will not get the core dump.
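Besides searching the source, you can check a running process from the shell (the PID is illustrative). SIGSEGV is signal 11, so if the 0x400 bit of the SigCgt mask is set, the process catches SIGSEGV itself and may exit without a core unless the handler restores the default action and re-raises the signal:

grep SigCgt /proc/"$PID"/status   # caught-signal mask; 0x400 means SIGSEGV is caught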
The answers given here cover most scenarios in which a core dump is not created fairly well. However, in my case, none of these applied. I'm posting this answer as an addition to the others.
If your core file is not being created for whatever reason, I recommend looking at /var/log/messages. There might be a hint in there as to why the core file was not created. In my case there was a line stating the root cause:
Executable '/path/to/executable' doesn't belong to any package
To work around this issue, edit /etc/abrt/abrt-action-save-package-data.conf and change ProcessUnpackaged from 'no' to 'yes':
ProcessUnpackaged = yes
This setting specifies whether to create cores for binaries not installed with the package manager.
If your program calls daemon() to daemonize itself, the current working directory changes to / by default. So if your program is a daemon, you should be looking for a core in the / directory, not in the directory of the binary.
Although this isn't going to be a problem for the person who asked the question, since they ran the program that was to produce the core file from a script containing the ulimit command, I'd like to document that the ulimit command is specific to the shell in which you run it (like environment variables). I spent way too much time running ulimit and sysctl and the like in one shell, while the command that I wanted to dump core ran in another shell, wondering why the core file was not produced.
I will be adding it to my bashrc. A sysctl setting applies to all processes once it is issued, but ulimit only applies to the shell in which it is issued (and its descendants), not to other shells that happen to be running.
Just in case someone else stumbles on this: I was running someone else's code, so make sure they are not handling the signal in order to exit gracefully. I commented out the handling and got the core dump.
On CentOS, if you are not using the root account and core files are not generated:
either give your account the required limits or log in as root, and edit the limits file:
vim /etc/security/limits.conf
account  soft  core  unlimited
account  hard  core  unlimited
Then, if you are in a login shell (via SecureCRT or another client):
log out and log back in for the new limits to take effect.
Allow Dump from Daemons
To allow all daemons which are started by systemd to dump core:
Edit /etc/systemd/system.conf and add the following:
DefaultLimitCORE=infinity
Edit /etc/sysctl.d/core.conf and add the following:
kernel.core_pattern = /var/lib/coredumps/core-%e-sig%s-user%u-group%g-pid%p-time%t
kernel.core_uses_pid = 1
fs.suid_dumpable = 2
More detail: https://pve.proxmox.com/wiki/Enable_Core_Dump_systemd
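For the settings above to take effect, something like the following is needed (a sketch; the directory must match the core_pattern chosen above):

mkdir -p /var/lib/coredumps && chmod 1777 /var/lib/coredumps
sysctl --system          # reload /etc/sysctl.d/*.conf
systemctl daemon-reexec  # make systemd re-read DefaultLimitCORE
# Individual services still need a restart to pick up the new limit.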
Our application stopped producing core dumps when a capability was set on it.
setcap 'cap_sys_nice=eip' /usr/bin/${our_app}
Removing it allowed core dumps to be generated again.
setcap '-r' /usr/bin/${our_app}
See also: How do I get a coredump from a setcap executable?
