Date of command in Linux

I am using Debian, and other people use this same computer too. I need to know the date each command was run on this computer, to discover who used it.
Does someone know which command I can use to find this out?
I'm sorry for my poor English.

The history command might be what you are looking for.
In the bash shell, the history is generally stored in the $HOME/.bash_history file. The command help history will be useful if that is your shell.
If you have accounting turned on, the lastcomm command would be useful to look at; but you probably don't have accounting turned on.
If you just want to know who was logged in at a certain time, the last command is helpful too.
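To illustrate (a minimal sketch, assuming bash; note that history only shows timestamps once HISTTIMEFORMAT is set, lastcomm needs the acct package with accounting enabled, and the username alice is hypothetical):

# Make bash display a timestamp for each history entry
export HISTTIMEFORMAT='%F %T  '
history | tail -n 5

# With process accounting on, list commands executed by a given user
lastcomm --user alice

# Show who was logged in and when
last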

Auditing.
Configuring and auditing Linux systems with Audit daemon
The Linux Audit Daemon is a framework to allow auditing events on a Linux system. Within this article we will have a look at installation, configuration and using the framework to perform Linux system and security auditing.
Auditing goals
By using a powerful audit framework, the system can track many event types to monitor and audit the system. Examples include:
Audit file access and modification
See who changed a particular file
Detect unauthorized changes
Monitoring of system calls and functions
Detect anomalies like crashing processes
Set tripwires for intrusion detection purposes
Record commands used by individual users
This is the kind of thing auditing is designed to do.
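As a hedged sketch, recording every command with auditd might look like this (assumes the audit package is installed and auditd is running; the rule key user-commands is an arbitrary label):

# Log every execve() system call on a 64-bit system, i.e. every program executed
auditctl -a always,exit -F arch=b64 -S execve -k user-commands

# Search the audit log for those events, with uids and timestamps decoded
ausearch -k user-commands -i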

Is there a way to nullify SIGSTOP for a certain script?

I have created a script and I want it to be virtually "immune" to SIGSTOP.
I understand that both SIGKILL and SIGSTOP cannot be trapped or ignored.
I know that the init system for Linux cannot receive a "fatal" signal due to it having the SIGNAL_UNKILLABLE flag on its signal struct flags (although the latter half of that sentence flies over my head for the most part).
I'm willing to edit my kernel to grant this script immunity; the only problem is that I don't know how.
So, my question is, is there a way to nullify SIGSTOP for a certain script/process?
I was able to deal with SIGKILL thanks to the Restart parameter in the service file for my script (using systemd), and while I have scrolled through the manuals looking for something similar for suspended processes, I haven't found anything yet.
Is there anything similar to Restart=always for process suspension caused by SIGSTOP?
I would rather not have to go through the process of changing things in or related to the kernel, but if it's the only way I will.
Thanks.
Okay, so the best solution I can come up with is SELinux.
SELinux is a Linux kernel security module originally developed by the NSA and later released to the public. It is commonly used on Linux systems and comes by default on Android devices. SELinux allows the creation of "contexts": additional labels applied to files and processes that allow the subdivision of responsibility and permissions.
How does this solve your problem? You can restrict the SELinux permissions of your user processes (even the root user) so that they are not allowed to signal this other process at all; in fact, you can prevent any interaction with it whatsoever. If you'd like, you could go so far as to prevent yourself from even being able to turn SELinux off (although from an operational perspective it's probably better that you don't, if you can avoid it). This is probably the closest you'll get to a solution that is anywhere near tamper-proof.
That being said, SELinux setup and configuration for this purpose is not exactly a walk in the park. Documentation is limited (but exists), distro-specific, and in some cases even esoteric. I do have some experience with SELinux myself.
Edit:
Some quick googling suggests it is possible to install SELinux on Arch, but like most things on Arch it requires some effort - more than fits in a StackOverflow comment block. However, I'll briefly describe your set of goals here once SELinux is installed:
Determine the context that you are currently in. The "id" command should show it (id -Z prints just the security context).
Use a context process transition so that when you execute your script, that script runs in a new context. You will probably need to create a new context for your script to run in.
Create sepolicy rules allowing that script to interact with your processes however you need. Perhaps this includes the ability to kill other processes in a different context, or to read from a TCP port using sniffing, etc. You can use the audit2allow program to help you create these rules.
By default, SELinux denies anything it doesn't explicitly allow. Your goal now is to make sure that everything you might want to do on your system is allowed, and add policy rules to allow all those things. Looking at the SELinux audit logs is a great way to see everything SELinux is complaining about - it's your job to go through and convert all those audit failures into "allow" rules.
Once all that is done, just make sure the context your processes/shell start in is not allowed to kill or signal the context that your script runs in, and you should be done. Now trying to SIGSTOP or SIGKILL it should generate a "Permission denied" error.
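A rough sketch of that workflow, assuming an SELinux-enabled system (the policy module name myscript is hypothetical):

# Step 1: show the SELinux security context of the current user and shell
id -Z

# Later: turn recent AVC denials from the audit log into a loadable policy module
ausearch -m avc -ts recent | audit2allow -M myscript
semodule -i myscript.pp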

Configure Chef to only report issues

I'm trying to configure an automation-testing system on Linux that reports any inconsistencies, such as incorrect file permissions, failed services, etc., on a custom Linux OS. I can write my own script to do that, but I need a general solution that supports a wide variety of situations and systems.
So, I was wondering whether I can configure Chef to only report problems and inconsistencies on Linux, but not fix them?
Kind of. We have a feature called "why-run mode" which does a dry-run check of what Chef would probably do if executed. Unfortunately, because Chef code is, at heart, arbitrary executable Ruby code, we can't be 100% certain. That said, I would try it out and see if the output is enough for your use.
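For what it's worth, why-run mode is just a flag on the client; it does a dry run and reports what would change without converging anything:

# Run chef-client in why-run (dry-run) mode; resources are checked but not modified
chef-client --why-run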

Intricacies of launching a complex shell script from CGI

Ok, so over the past year I have built some rather complex automation scripts (mostly bash, but with some perl here and there) for some of the more common work we do at my place of business. They rely heavily on ImageMagick, Ghostscript, and PhantomJS, to name just a few. They also traverse a huge number of directories across the network and several different file systems and host OSs... Frankly, the fact that they work is a bit of a miracle, and perhaps a testament to my willingness to keep beating my head against the wall... Also, trust me, this is easier and more effective than trying to corral my resources. Our archives are... organic... and certain high-ranking individuals in the company think of them as belonging to them and do not look out for the interests of the company in their management. They are, at least, relatively well backed up these days.
In any case, these scripts automate the production of a number of image-based print-ready products of varying degrees of complexity, up to multi-hundred-page image-heavy books, and as such some of them accept absurdly complex argument structures to do all the things they do. (P.S. embedded JavaScript in SVGs is a MAGICAL thing!) These systems have been in "working beta" for a while now, which basically means I've been hand-entering the commands at a terminal to run them. I want to move them up to production and offer them as a web service so that those in production who are not friends with the command line can use them, and also potentially integrate them with our new custom-developed order management system.
TL;DR below
So that's the background; the problem is this:
I'm running everything on a headless CentOS 6.4 virtual machine with SELinux disabled.
Apache2 serves up my interface.sh CGI just fine, and the internet has already helped me make the POST data into shell variables. Now I need to launch the worker scripts that actually direct the heavy lifting and coordinate the binaries... from the CGI:
#get post data from form and make it into variables...
/bin/bash /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" "$arg5" "$arg6" "$opt1" "$arg7"
Nothing.
httpd log shows permission denied, fair enough!
Ok, googling suggests that the script being called by the CGI must also be owned by the apache user and group, or by root with 755 permissions. Done!
Now the httpd log shows permission denied for things worker.sh is trying to do :/
Google has led me to believe that, for security reasons, fcgi requires that everything interacted with by the CGI process chain must have correctly controlled permissions, all the way down to the binaries and source files... Sure, this is smart for security and damage control, but almost impossible in my case. We have very dynamic data and terabytes of resources... :/
The script worker.sh exports its own environment and runs all its commands as root. This is largely to overcome the minefield of permissions disasters that I have to contend with, and CentOS's own paranoia about allowing stuff to happen. I had hoped this might be a workaround, but no.
One suggestion I have seen is to simply write out the composed command to a text file and have cron or incron do something with the text file. Seems like that would work... BUT, I'd love to be able to get STDIO back into my web page as there are verbose errors and notifications (though no interaction) in many of these worker scripts, and I would like to provide notification of completion as well. Is there any way to do this that doesn't require a permissions war to be waged?
To run certain commands as another user, you can use sudo.
Set up sudo to allow passwordless access to run your command by the apache user. Then you can have the CGI script call sudo /path/to/script args to run it as root (or -u for another user of your choice).
It's very hard to make this secure, so you should make sure your CGI script is only accessible by trustworthy individuals.
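A hedged sketch of the sudoers side (edit it with visudo; the script path comes from the question, and granting the web server user root access to it is exactly why this is hard to secure):

# /etc/sudoers.d/apache-worker: let the apache user run the worker script as root, no password
apache ALL=(root) NOPASSWD: /path/to/script/worker.sh

The CGI script would then invoke it as:

sudo /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" "$arg5" "$arg6" "$opt1" "$arg7"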

How do I determine whether linux process accounting (accton) is currently running?

Is there any way to determine whether process accounting (accton) is running? There is no process listed in the process table ("ps"), and I see nothing under "/etc" that I can call with "status" to get a status of accounting.
I'm running a custom build based on "Linux From Scratch", so while I understand that CentOS has "psacct", I don't have that available.
I could watch the log file and see if it's growing - not ideal, but if it's all I got, then it's all I got. I'm hoping there's a better way.
Appreciate any info.
Maybe you can do it like this: enable logging for accton, and check the log file for updates, e.g. changes to its modification or last-access time.
Run /sbin/accton /var/log/accto.log and the log will be created.
Then use some script to monitor accto.log. There are lots of log-file monitoring scripts for Nagios, if you are interested.
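A trivial check along those lines (assuming the log path from the answer above):

# If accounting is on, the file's size and mtime keep changing as commands run
stat -c '%s bytes, modified %y' /var/log/accto.log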

How to determine that the shell script is safe

I downloaded this shell script from this site.
It's suspiciously large for a bash script. So I opened it with a text editor and noticed that after the code there are a lot of nonsense characters.
I'm afraid of giving the script execution rights with chmod +x jd.sh. Can you advise me how to tell whether it's safe, or how to run it with limited rights on the system?
Thank you.
The "non-sense characters" indicate binary files that are included directly into the SH file. The script will use the file itself as a file archive and copy/extract files as needed. That's nothing unusual for an SH installer. (edit: for example, makeself)
As with other software, it's virtually impossible to decide whether or not running the script is "safe".
Don't run it! That site is blocked where I work, because it's known to serve malware.
Now, as to verifying code: it's not really possible without isolating it completely (technically difficult, but a VM might serve if it has no known vulnerabilities) and running it to observe what it actually does.
A healthy dose of mistrust is always useful when using third-party software, but of course nobody has time to verify all the software they run, or even a tiny fraction of it. It would take thousands (more likely millions) of work years, and would find enough bugs to keep developers busy for another thousand years. The best you can usually do is run only software which has been created or at least recommended by someone you trust at least somewhat. Trust has to be determined according to your own criteria, but here are some which would count in the software's favor for me:
Part of a major operating system/distribution. That means some larger organization has decided to trust it.
Source code is publicly available. At least any malware caused by company policy (see Sony CD debacle) would have a bigger chance of being discovered.
Source code is distributed on an appropriate platform. Sites like GitHub enable you to gauge the popularity of software and keep track of what's happening to it, while a random web site without any commenting features, version control, or bug database is an awful place to keep useful code.
While the source of the script does not seem trustworthy (IP address?), this might still be legit. With shell scripts it is possible to append binary content at the end and thus build a type of installer. Years ago, Sun would ship the JDK for Solaris in exactly that form. I don't know if that's still the case, though.
If you want to test it without risk, I'd install Linux in VirtualBox (free virtual-machine software), run the script there, and see what it does.
Addendum on see what it does: there is a variety of tools on UNIX that you can use to observe what a program does, such as strace and ltrace (both built on the ptrace facility). What might also be interesting is running the script inside a chroot; that way you can easily find all the files it installs.
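For example, a first pass inside a disposable VM might be (the trace filename is arbitrary; jd.sh is the script from the question):

# Record every file-related system call the script and its children make
strace -f -e trace=%file -o jd-trace.log sh jd.sh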
But at the end of the day this will probably yield more binary files which are not easy to examine (as probably any developer of anti-virus software will tell you). Therefore, if you don't trust the source at all, don't run it. Or if you must run it, do it in a VM where at least it won't be able to do too much damage or access any of your data.
