How do I determine whether Linux process accounting (accton) is currently running?

Is there any way to determine whether process accounting (accton) is running? There is no process listed in the process table ("ps"), and I see nothing under "/etc" that I can call with "status" to get a status of accounting.
I'm running a custom build based on "Linux From Scratch", so while I understand that CentOS has "psacct", I don't have that available.
I could watch the log file and see if it's growing - not ideal, but if it's all I got, then it's all I got. I'm hoping there's a better way.
Appreciate any info.

Maybe you can do it like this: enable logging for accton and check the log file for updates (timestamp changes such as last-access or modify time).
Run /sbin/accton /var/log/accto.log and the log will be created.
Then use a script to monitor your accto.log. There are lots of log-file monitoring scripts for Nagios if you are interested.
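A minimal sketch of that idea, assuming the accounting file is the /var/log/accto.log path used above (adjust it to wherever you pointed accton):

    # Hypothetical check: run a throwaway command, then see whether the
    # accounting file grew. If accton is active, every process exit appends a record.
    ACCT_FILE=/var/log/accto.log
    before=$(stat -c %s "$ACCT_FILE" 2>/dev/null || echo 0)
    /bin/true                 # any short-lived command will do
    sleep 1                   # give the kernel a moment to flush the record
    after=$(stat -c %s "$ACCT_FILE" 2>/dev/null || echo 0)
    if [ "$after" -gt "$before" ]; then
        echo "process accounting appears to be ON"
    else
        echo "process accounting appears to be OFF"
    fi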

Related

Date of command in Linux

I am using Debian and other people are using this same computer too. I need to know the date each command was run on this computer so I can discover who used it.
Does someone know which command I can use to discover this?
I'm sorry for my poor English.
The history command might be what you are looking for.
In the bash shell, the history is generally stored in the $HOME/.bash_history file. The command help history will be useful if that is your shell.
If you have accounting turned on, the lastcomm command would be useful to look at; but you probably don't have accounting turned on.
If you just want to know who was logged in at a certain time, the last command is helpful too.
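A short sketch of how those pieces can be combined; the user name is a placeholder, and lastcomm only works if accounting is enabled:

    # Record a timestamp with every new history entry (put this in
    # /etc/profile or ~/.bashrc); it only helps for commands run after it is set.
    export HISTTIMEFORMAT='%F %T  '
    history                   # entries now show date and time alongside the command

    # If BSD process accounting is on, lastcomm lists executed commands per user;
    # last shows login sessions with full timestamps.
    lastcomm --user alice     # "alice" is a placeholder user name
    last -F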
Auditing.
Configuring and auditing Linux systems with Audit daemon
The Linux Audit Daemon is a framework to allow auditing events on a Linux system. Within this article we will have a look at installation, configuration and using the framework to perform Linux system and security auditing.
Auditing goals
By using a powerful audit framework, the system can track many event types to monitor and audit the system. Examples include:
Audit file access and modification
See who changed a particular file
Detect unauthorized changes
Monitoring of system calls and functions
Detect anomalies like crashing processes
Set tripwires for intrusion detection purposes
Record commands used by individual users
This is the kind of thing auditing is designed to do.
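As a rough illustration of the "record commands used by individual users" item above, an auditd rule along these lines logs every execve(); the arch fields and the key name "cmds" are assumptions for a 64-bit system:

    # Log every program execution, searchable later by the key "cmds"
    auditctl -a always,exit -F arch=b64 -S execve -k cmds
    # Same rule for 32-bit binaries running on a 64-bit kernel
    auditctl -a always,exit -F arch=b32 -S execve -k cmds

    # Later, report who ran what
    ausearch -k cmds --interpret | less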

How can I view CruiseControl.Net logs in real time?

I use CruiseControl.Net for continuous integration and I would like to read the log output of the current project in real time. For example, if it is running a compile command, I want to be able to see all the compile output so far. I can see where the log files are stored but it looks like they are only created once the project finishes. Is there any way to get the output in real time?
The CCTray app will allow you to see a snapshot of the last 5 or so lines of output of any command on a regular interval.
It's not a live update, as that would be too resource intensive, as would a full output of the log to date.
Unless you write something to capture and store the snapshots, you're out of luck. Doing this also presents the possibility of missing messages that appear between snapshots, so it would not be entirely reliable. It would, however, give you a slightly better idea of what is going on.
You can run ccnet.exe as a command line application instead of running ccservice as a Windows service. It will output to the terminal as it runs. It's useful for debugging.

Forensic analysis - process log

I am performing Forensic analysis on Host based evidence - examining partitions of a hard drive of a server.
I am interested in finding the processes all the "users" ran before the system died/rebooted.
As this isn't live analysis I can't use ps or top to see the running processes.
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran.
I have gone through a lot of logs in /var/log/* - they give me information about logins, package updates, authorization - but nothing about the processes.
If "command accounting" was not enabled, there is no such log.
The chances of finding something are not great; still, a few things to consider:
depends how graceful the death/reboot was (if processes were killed gracefully, .bash_history and similar files may be updated with recent session info)
utmp and wtmp files may give the list of active users at the reboot.
the OS may be saving a crash dump (depends on the Linux distribution). If so, you may be able to examine the OS state at the moment of the crash. See Red Hat's crash utility for details (http://people.redhat.com/anderson/crash_whitepaper/).
/tmp and /var/tmp may hold some clues about what was running
any files with mtime and ctime timestamps (maybe atime also) near the time of the crash
maybe you can get something useful from the swap partition (especially if the reboot was related to heavy RAM usage).
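A sketch of what a couple of those checks might look like against a mounted evidence image; the /mnt/evidence mount point and the crash window are assumptions:

    # Logins and reboots recorded in the image's own wtmp, not the analysis box's
    last -f /mnt/evidence/var/log/wtmp
    who /mnt/evidence/var/run/utmp

    # Files modified in the hour before the suspected crash time (placeholder window)
    find /mnt/evidence -xdev -newermt '2013-06-01 03:00' ! -newermt '2013-06-01 04:00' -printf '%T+ %p\n'

    # Per-user shell history, if sessions ended cleanly enough to flush it
    cat /mnt/evidence/home/*/.bash_history /mnt/evidence/root/.bash_history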
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran
Given the OS implied by the /var/log path, I am assuming you are on Ubuntu or some other Linux-based server. If you were not doing live forensics while the box was running or memory forensics (where a memory capture was grabbed), and you rebooted the system, there is no file within /var/log that will attribute processes to users. However, if the user was using the bash shell, you could check the .bash_history file, which shows the commands that user ran (limited to the last 500 entries by default for bash).
Alternatively, if a memory dump was made (/dev/mem or /dev/kmem), then you could use Volatility to pull out the processes that were run on the box. But even then, I do not think you could attribute the processes to the users that ran them; you would need additional output from Volatility for that link to be made.
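If a memory dump does exist, a Volatility 2 run would look roughly like this; the profile name is an assumption, since you have to build or choose one matching the exact kernel of the dumped system:

    # List the processes that were running when the dump was taken
    vol.py -f memdump.lime --profile=LinuxDebian7x64 linux_pslist
    # Recover in-memory bash history, which can help tie commands to sessions
    vol.py -f memdump.lime --profile=LinuxDebian7x64 linux_bash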

How can I tell (in a bash script) if a Clonezilla batch-mode backup succeeded?

This is my first ever post to stackoverflow, so be gentle please. ;>
Ok, I'm using slightly customized Clonezilla-Live cd's to backup the drives on four PCs. Each cd is for a specific PC, saving an image of its disk(s) to a box-specific backup folder on a samba server. That's all pretty much working. But once in a while, Something Goes Wrong, and the backup isn't completed properly. Things like: the cat bit through a cat5e cable; I forgot to check if the samba server had run out of room; etc. And it is not always readily apparent that a failure happened.
I will admit right now that I am pretty much a noob as far as Linux system administration goes, even though I somehow managed to set up a CentOS 6 box (I wish I'd picked Ubuntu...) with Samba, git, ssh, and Bitnami GitLab back in February.
I've spent days and days and days trying to figure out if clonezilla leaves a simple clue in a backup as to whether it succeeded completely or not, and have come up dry. Looking in the folder for a particular backup job (on the samba server) I see that the last file written is named "clonezilla-img". It seems to be a console dump that covers the backup itself. But it does not seem to include the verification pass.
Regardless of whether the batch backup task succeeded or failed, I can run a post-process bash script automagically, which I place on my Clonezilla CDs. I have this set to run just fine, though it's not doing a whole lot right now. What I would like this post-process script to do is determine if the backup job succeeded or not, and then rename (mv) the backup job directory to include some word like "SUCCESS" or "FAILURE". I know how to do the renaming part. It's the test for success or failure that I'm at a loss about.
Thanks for any help!
I know this is old, but I've just started looking into doing something very similar.
For your case I think you could do what you are looking for with ocs_prerun and ocs_postrun scripts.
For my setup I'm using a pen/flash drive for some test systems and also PXE with an NFS mount. PXE and NFS are much easier to test and modify quickly.
I haven't tested this yet, but I was thinking that I might be able to search the logs in /var/log/{clonezilla.log,partclone.log} via an ocs_postrun script to validate success or failure. I haven't seen anything that indicates the result is set in the environment so I'm thinking the logs might be the quick easy method over mounting or running a crc check. Clonezilla does have an option to validate the image, the results of which might be in the local logs.
Another option might be to create a custom ocs_live_run script to do something similar. There is an example at this URL http://clonezilla.org/fine-print-live-doc.php?path=./clonezilla-live/doc/07_Customized_script_with_PXE/00_customized_script_with_PXE.doc#00_customized_script_with_PXE.doc
Maybe in the script the exit code of ocs-sr can be checked? As I said I haven't tried any of this, just some thoughts.
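Along those lines, an ocs_postrun script could look something like this sketch; the image directory name, the log paths, and the "fail|error" pattern are all assumptions you would need to verify against real Clonezilla logs:

    #!/bin/bash
    # Hypothetical ocs_postrun hook: decide success/failure from the logs
    # and rename the backup image directory accordingly.
    IMG_DIR=/home/partimag/$(date +%Y-%m-%d)-img   # assumed image location and naming
    if grep -qiE 'fail|error' /var/log/clonezilla.log /var/log/partclone.log 2>/dev/null; then
        mv "$IMG_DIR" "${IMG_DIR}_FAILURE"
    else
        mv "$IMG_DIR" "${IMG_DIR}_SUCCESS"
    fi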
I updated the above to reflect the log location (/var/log). The logs are in the log folder of course. :p
Regards

Linux service crashes

I have a linux service (c++, with lots of loadable modules, basically .so files picked up at runtime) which from time to time crashes ... I would like to get behind this crash and investigate it, however at the moment I have no clue how to proceed. So, I'd like to ask you the following:
If a Linux service crashes, where is the "core" file created? I have set ulimit -c 102400, which should be enough; however, I cannot find the core files anywhere :(.
Are there any linux logs that track services? The services' own log obviously is not telling me that I'm going to crash right now...
Might be that one of the modules is crashing ... however I cannot tell which one. I cannot even tell which modules are loaded. Do you know how to show in linux which modules a service is using?
Any other hints you might have in debugging a linux service?
Thanks
f-
Under Linux, processes which switch user ID get their core files disabled for security reasons. This is because they often do things like reading privileged files (think /etc/shadow), and a core file could contain sensitive information.
To enable core dumping on processes which have switched user ID, you can use prctl with PR_SET_DUMPABLE.
Core files are normally dumped in the current working directory - if that is not writable by the current user, then it will fail. Ensure that the process's current working directory is writable.
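From the shell side, the settings that usually matter for this can be checked like so (a sketch; fs.suid_dumpable governs processes that changed credentials, and core_pattern decides where cores are written):

    ulimit -c unlimited                    # no size limit on core files in this shell
    cat /proc/sys/kernel/core_pattern      # where and how the kernel writes cores
    sysctl fs.suid_dumpable                # 0 disables cores for setuid/credential-changing processes
    sysctl -w fs.suid_dumpable=2           # "suidsafe": allow root-readable cores for such processes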
0) Get a staging environment which mimics production as close as possible. Reproduce problem there.
1) You can attach to a running process using gdb -p <pid> (you need a debug build of course)
2) Make sure the ulimit is what you think it is (output ulimit to a file from the shell script which runs your service, right before starting it). Usually you need to set the ulimit in /etc/profile; use ulimit -c unlimited to remove the limit on core size
3) Find the core file using find / -name \*core\* -print or similar
4) I think gdb will give you the list of loaded shared objects (.so) when you attach to the process.
5) Add more logging to your service
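A few of those steps spelled out as commands; the service name "myservice" is a placeholder:

    # 3) Locate any core files on the box
    find / -xdev -name 'core*' -type f 2>/dev/null

    # 1) / 4) Attach to the running service; "info sharedlibrary" inside gdb lists loaded modules
    pid=$(pidof myservice)                 # hypothetical binary name
    gdb -p "$pid"

    # Or, without gdb, the mapped .so files are visible in /proc
    grep '\.so' /proc/"$pid"/maps | awk '{print $6}' | sort -u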
Good luck!
Your first order of business should be getting a core file. See if this answer applies.
Second, you should run your server under Valgrind, and fix any errors it finds.
Reproducing the crash when running under GDB (as MK suggested) is possible, but somewhat unlikely: bugs tend to hide when you are looking for them, and the debugger may affect timing (especially if your server is multi-threaded).
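A typical Valgrind invocation for this kind of hunt looks roughly like the following; the binary name and options are placeholders:

    # Write one log per process, including children the service forks
    valgrind --leak-check=full --track-origins=yes --log-file=vg.%p.log ./myservice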
