.bashrc and various Linux startup scripts

On Linux (e.g. CentOS), if I need to run a start-up script, what are the various places from which the script can be called, and what is each one's convention?

Vixie cron(8) lets you use the @reboot specifier to run programs at startup. This can go in /etc/crontab or in any user's personal crontab(5) file. I wouldn't recommend programmatic use of these files; leave them for the admins. (Though giving commands for admins to copy and paste into their crontab(5) is probably friendly.)
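For example, a personal crontab(5) entry using @reboot might look like this (the script path is hypothetical):
@reboot /usr/local/bin/my-startup.sh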
You can place startup scripts in the standard SysV init /etc/init.d/ directory and create the matching symbolic links in the /etc/rc*.d directories. I imagine init(8) has details on the scheme in place.
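On CentOS the symlinks are usually managed for you by chkconfig(8); a minimal sketch, assuming a hypothetical script named myscript that carries the "# chkconfig:" header comment chkconfig requires:
cp myscript /etc/init.d/myscript
chmod 755 /etc/init.d/myscript
chkconfig --add myscript   # creates the /etc/rc*.d/ symlinks
chkconfig myscript on      # enable it for the default runlevels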
There is often an /etc/rc.local file or similar file available for system administrators to configure. I wouldn't recommend programmatic use of this file, leave it for the admins.
Depending upon how far along CentOS is in converting to Upstart, you can place job specifications in /etc/init. These look much easier to write than init scripts, but they are sadly very underdocumented at the moment.
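As a rough sketch, an Upstart job file might look like the following (the file name and script path are hypothetical):
# /etc/init/myjob.conf
description "run my startup task"
start on runlevel [2345]
stop on runlevel [016]
exec /usr/local/bin/my-startup.sh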
.bashrc, /etc/profile, and the like are a complete red herring. Shell startup scripts are for system administrator or user configuration; programmers should stay away.

Related

What is the difference between /etc/rc.local and ~/.bashrc?

This is a Linux-related problem. I have searched around but did not find a good explanation.
It seems to me that both files configure the setup when I log in, but is there any difference? I notice that there seems to be "some rule" deciding what should go into which of the two files. For example, if I need to add a specific search path to $PATH, I should do it in ~/.bashrc. But if I decide to change some system setting, like
/sys/class/backlight
or
/sys/devices/system/cpu/cpu#/online
then I have to do this in /etc/rc.local, otherwise it will not work.
Is it because these configurations cannot differ between users?
Thanks.
The difference is in when they are run and who they run as: rc.local is run on a change of runlevel, and it runs as root. .bashrc is Bash-specific and is run when a non-login interactive shell starts, as the particular user.
You can find a good explanation of rc.local here:
The script /etc/rc.local is for use by the system administrator. It is traditionally executed after all the normal system services are started, at the end of the process of switching to a multiuser runlevel. You might use it to start a custom service, for example a server that's installed in /usr/local. Most installations don't need /etc/rc.local; it's provided for the minority of cases where it's needed.
and you can find what you need about .bashrc in man bash:
When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc.
There's more info on bashrc in this question...
https://superuser.com/questions/49289/what-is-the-bashrc-file
I asked this question a month ago, though I later realized that Stack Overflow is not the best site for this Linux question. Thanks to the people who answered it earlier, but I would like to add some more explanation here.
Basically there are (at least) three stages where a user may change system environment in Linux:
when the system boots; This stage is most appropriate if we want a permanent system setting, and the change should be made via /etc/.... For example, in my original question, the backlight, as well as the on-line/off-line management of some CPUs, can be set this way, and /etc/rc.local is the right shell script to edit (a sketch follows this list). By "permanent" I mean that the change affects all users of the system.
when a user logs in; This stage is most appropriate if a user only wants to change his personal Linux environment, so files under ~/ (i.e. $HOME) are the right place to look. For example, ~/.profile (or the Bash-specific ~/.bash_profile or ~/.bash_login) is a shell script run at login time. ~/.pam_environment is not a shell script, but it is useful for setting environment variables (see the Ubuntu official wiki page on environment variables for more information).
when a user starts a bash shell; This stage is more restricted, as it only has effects inside that bash shell (and its child processes), and hence does not affect the GUI environment. So if a user does most of his work from a shell, this is an appropriate stage to use. The shell script for this stage is ~/.bashrc; for example, the PATH environment variable can be extended here.
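As a rough illustration of stages one and three (the device names, CPU number, and values below are examples only; check the paths on your own hardware):
# appended to /etc/rc.local -- system-wide, runs as root at boot
echo 50 > /sys/class/backlight/acpi_video0/brightness
echo 0 > /sys/devices/system/cpu/cpu1/online   # take cpu1 offline

# appended to ~/.bashrc -- per-user, applies to each new bash shell
export PATH="$HOME/bin:$PATH"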
Hopefully this summary is more intuitive than technical.
.bashrc runs for each bash session started (i.e. every time you open a shell). It sounds as though you're talking about .bashrc as if it were .bash_profile, which is run once per login.
Depending on what kind of setup you're running, rc.local is a legacy construct, but traditionally it was run at the last runlevel during startup. You can see this link for other related posts talking about rc.local.
If you're on a system running systemd, backlight handling is usually covered by the systemd-backlight.service unit shipped in the systemd package.

How to "safely" allow others to work on my server?

I sometimes have a need to pay someone to perform some programming which exceeds my expertise. And sometimes that someone is someone I might not know.
My current need is to configure Apache, which happens to be running on CentOS.
Giving root access via SSH on my main physical server is not an option.
What are my options?
One thought is to create a VPS (with Linux as the guest) on my main physical server (also running Linux) using VirtualBox (or an equivalent), have them do the work, figure out what they did, and manually implement the changes myself.
Seem secure? Maybe better options? Thank you
I suggest looking into the chroot command.
chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /. The root directory is inherited by all children of the calling process.
The implication of this is that once inside a chroot "jail" a user cannot see "outside" of the jail; you've changed their root directory. You can include custom binaries, or none at all (I don't see why you'd want that, but the point being: YOU decide what the developer can and can't see).
You can use a directory for the chroot, or you could use my personal favorite: a mounted file, so your "jail" is easily portable.
Unfortunately I am a Debian user, and I would use debootstrap to build a minimal system in a small file (say, 5 GB), but there doesn't seem to be an official RPM equivalent. However, the process is fairly simple. Create a file; I would do so with dd if=/dev/zero of=jailFile bs=1M count=5120. Then run mkfs.ext4 jailFile. Finally, mount it and include any files you wish the jailed user to use (this is what debootstrap does: it downloads all the default goodies in /bin and such), either manually or with a tool.
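Putting those steps together, a minimal sketch (the file name, size, and mount point are arbitrary examples):
dd if=/dev/zero of=jailFile bs=1M count=5120   # a 5 GB image file
mkfs.ext4 -F jailFile                          # -F: don't balk at a non-block device
mkdir -p /mnt/jail
mount -o loop jailFile /mnt/jail
# now populate /mnt/jail/bin, /mnt/jail/lib, etc. with what the user may see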
After these steps you can copy this file around, make backups, or move servers even. All with little to no effort on the user side.
From a short Google search there appears to be a third-party tool that does nearly the same thing as debootstrap, here. If you are comfortable compiling this tool, can build a minimal system manually, or can find an alternative, and the idea of a portable ext4 jail is appealing to you, I suggest this approach.
If the idea is unappealing, you can always chroot into a directory, which is very simple.
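For instance, assuming a jail tree at /srv/jail that already contains a shell and the libraries it needs (ldd will tell you which):
chroot /srv/jail /bin/bash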
Here are some great links on chroot:
https://wiki.archlinux.org/index.php/Change_root
https://wiki.debian.org/chroot
http://www.unixwiz.net/techtips/chroot-practices.html
Also, here and here are great links about using chroot with OpenSSHServer.
On a side note: I do not think the question was off topic, but if you feel the answers here are inadequate, you can always ask on https://serverfault.com/ as well!
Controlling permissions is some of the magic at the core of the Linux world.
You... could add the individual as a non-root user, and then work towards providing specific access to the files you would like him to work on.
Doing this requires a fair amount of 'nixing to get right.
Of course, this is one route... If the user is editing something like an Apache configuration file, why not set up the file within a private Bitbucket or GitHub repository?
This way, you can see the changes that are made, confirm they are suitable, then pull them into production at your leisure.

Is there a way to trigger a script (ksh) as non-root user on logon, other than adding it to .profile?

Since I don't have root access, I cannot add the script to .profile.
I tried to use crontab to make it run every 10 seconds; even that is not allowed, since I am not root.
You don't need root permission to run a program from cron, because each user has their own separate crontab entry. So you can do it with cron.
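For example, edit your personal crontab with crontab -e and add an entry; the script path here is hypothetical, and note that cron's granularity is one minute, so a 10-second interval isn't possible:
# every 10 minutes:
*/10 * * * * $HOME/bin/myscript.ksh
# or once, when the machine boots:
@reboot $HOME/bin/myscript.ksh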
Your ~/.profile should be under your control, as should your own crontab. That is, unless your sysadmin has gone out of her way to prevent users from manipulating these, in which case you will just need to talk to them.
If your ~/.profile is not loading, you may want to try ~/.kshrc if you are using the Korn shell as your user's shell. Or, if it's bash, you can use ~/.bashrc or ~/.bash_profile (this is most common on Linux).

Intricacies of Launching a complex shell script from CGI

Ok, so over the past year I have built some rather complex automation scripts (mostly bash, but with some perl here and there) for some of the more common work we do at my place of business. They rely heavily on ImageMagick, Ghostscript, and PhantomJS to name just a few. They also traverse a huge number of directories across the network and several different file systems and host OSs... Frankly the fact that they work is a bit of a miracle and perhaps a testament to my willingness to keep beating my head against the wall... Also, trust me, this is easier and more effective than trying to corral my resources. Our archives are... organic... and certain high-ranking individuals in the company think of them as belonging to them and do not look out for the interests of the company in their management. They are, at least, relatively well backed-up these days.
In any case, these scripts automate the production of a number of image-based print-ready products of varying degrees of complexity, up to multi-hundred-page image-heavy books, and as such some of them accept absurdly complex argument structures to do all the things they do. (P.S. Embedded JavaScript in SVGs is a MAGICAL thing!) These systems have been in "working beta" for a while now, which basically means I've been hand-entering the commands at a terminal to run them. I want to move them up to production and offer them as a web service, so that those in production who are not friends with the command line can use them, and to also potentially integrate them with our new custom-developed order management system.
TL;DR below
So that's the background; the problem is this:
I'm running everything on a headless CentOS 6.4 virtual machine with SELinux disabled.
Apache2 serves up my interface.sh CGI just fine, and the internet has already helped me make the POST data into shell variables. Now I need to launch the worker scripts that actually direct the heavy lifting and coordinate the binaries... from the CGI:
#get post data from form and make it into variables...
/bin/bash /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" "$arg5" "$arg6" "$opt1" "$arg7"
Nothing.
httpd log shows permission denied, fair enough!
Ok, googling suggests that the script being called by the CGI must also be owned by the apache user and group, or by root with 755 permissions. Done!
Now the httpd log shows permission denied for the things worker.sh is trying to do :/
Google has led me to believe that, for security reasons, fcgi requires that everything the CGI process chain interacts with must have correctly controlled permissions, all the way down to the binaries and source files... Sure, this is smart for security and damage control, but almost impossible in my case. We have very dynamic data and terabytes of resources... :/
The script worker.sh exports its own environment and runs all its commands as root. This is largely to overcome the minefield of permissions disasters that I have to contend with, and CentOS's own paranoia about allowing stuff to happen. I had hoped this might be a workaround, but no.
One suggestion I have seen is to simply write out the composed command to a text file and have cron or incron do something with it. Seems like that would work... BUT I'd love to be able to get STDIO back into my web page, as there are verbose errors and notifications (though no interaction) in many of these worker scripts, and I would like to provide notification of completion as well. Is there any way to do this that doesn't require a permissions war to be waged?
To run certain commands as another user, you can use sudo.
Set up sudo to allow the apache user to run your command without a password. Then you can have the CGI script call sudo /path/to/script args to run it as root (or use -u for another user of your choice).
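A minimal sketch of such a sudoers entry, added via visudo(8) (the script path is the one from the question; adapt the rule to taste):
# /etc/sudoers fragment: let the apache user run the worker script as root
apache ALL=(root) NOPASSWD: /path/to/script/worker.sh
The CGI would then invoke sudo /path/to/script/worker.sh "$arg1" "$arg2" ... instead of calling the script directly.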
It's very hard to make this secure, so you should make sure your CGI script is only accessible by trustworthy individuals.

Lock an SSH user to their home dir & one service

I'm really new to Linux. I Googled for a couple of days, and installed Java and Tomcat on CentOS.
Now I need a user that has all privileges in their home directory (including files, subdirs, and files in subdirs), but cannot access any other dir.
Also, this user has to have permission to manage one service (I created a tomcat service, which I can 'start', 'stop', and 'restart').
Can anyone explain how to do this?
You've asked for a lot.
There are a few possible approaches:
Entirely with "native" Linux permissions
Using a mandatory access control system
Native Linux permissions
Create this new user their own new group. Make them the only member of this group.
Remove world read, write, and execute permissions on all your data files. If any users were getting their access to the data files via world permissions, create new groups for all the users and data as appropriate (maybe one for accounting, one for billing, one for sales, one for engineering, etc. Whatever works.)
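A sketch with hypothetical names:
useradd -m -U newdev            # -U also creates a private group named newdev
chmod -R o-rwx /srv/data        # drop world access to the data files
chgrp -R accounting /srv/data/accounting   # example of a per-team group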
Add a new sudoers(5) entry for this user for sudo stop tomcat, sudo start tomcat, sudo restart tomcat, sudo status tomcat -- or whichever commands this user will need to execute to manage the tomcat service. See visudo(8) for details on editing the sudo(8) config file.
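Such an entry might look like the following (the username is hypothetical; on CentOS the service(8) wrapper is a common way to drive init scripts):
newdev ALL=(root) NOPASSWD: /sbin/service tomcat start, \
    /sbin/service tomcat stop, /sbin/service tomcat restart, \
    /sbin/service tomcat status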
If you really want to lock this user down, copy in the utilities that this person will need into their ~/bin/ dir and then proceed to remove the world execute bit on /bin, /sbin, /usr/bin, /usr/sbin. (Leave /lib, /usr/lib, etc. alone -- copying in the libraries this user will need is doubtless a lot of work.)
Mandatory access controls
I'll explain this using the AppArmor system; I've worked on AppArmor for over a decade, and it is the system I know best. There are more choices: TOMOYO, SMACK, and SELinux are all excellent tools. AppArmor and TOMOYO work on the idea that you confine access to pathnames. SMACK and SELinux work on the idea that every object on the system is assigned a label and the policy specifies which labels (on processes) can read, write, execute, etc. labels (on data or other processes). If you wanted to enforce a comprehensive Open, Classified, Secret, Top Secret style of protection, SMACK or SELinux would be the better tools. If you want to confine some programs to some files, AppArmor or TOMOYO would be the better tools.
AppArmor should come ready-to-use on most Ubuntu, SUSE, PLD, Annvix, Mandriva, and Pardus distributions.
The AppArmor system confines processes and controls how processes can move from domain to domain when the processes execute new programs. Domains are usually assigned by program.
The easiest way to get started is to copy /bin/bash to /bin/jail_bash (or some other name not in /etc/shells), set the shell for the user in /etc/passwd (chsh(1) can make this easy), and create an AppArmor profile for /bin/jail_bash that allows only the actions you want to allow. If we confine the process correctly, then the user cannot escape the profile we make for them.
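A sketch of that first setup step (the username is hypothetical; the AppArmor profile itself is built in the steps below):
cp /bin/bash /bin/jail_bash
chsh -s /bin/jail_bash newdev   # root may set a shell that isn't in /etc/shells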
As in the native-permissions approach above, add a sudoers(5) entry for this user covering the tomcat management commands; see visudo(8) for details.
In one terminal, run aa-genprof jail_bash. In another terminal, log in as the user (or otherwise run /bin/jail_bash) and begin doing tasks that you want to allow the person to do. We'll use what you do as training material to build a profile iteratively. You might be interested to watch /var/log/syslog or /var/log/audit/audit.log (if you have the auditd package installed) to see what operations AppArmor notices your program doing. Don't do too much at once -- just a few new things per iteration.
In the aa-genprof terminal, answer the questions as they come up. Allow what needs to be allowed; deny what ought to be denied. When you are asked about execution privileges, prefer inherit or child over profile. (The profile option will influence everyone else on the system; inherit or child will only influence executions from whatever profile you're currently improving. Child breaks privileges apart into smaller pieces, while inherit keeps permissions in larger profiles. Prefer inherit for this case.)
Once you get to questions about executing tomcat, use the unconfined execute privilege. This is dangerous -- if a bug in the way tomcat is started allows people to start unconfined shells, then this can be used to break out of the jail. You could confine tomcat (and this is even a good idea -- tomcat isn't perfect) to prevent this from being an escape route, but that is probably not necessary right away.
AppArmor is designed to make it easy to grow the profiles on a system over time. AppArmor isn't applicable to all security situations, but we deployed scenarios very similar to this at the DEF CON Capture-the-flag hacking contest with excellent results. We had to allow fellow attackers root (and ephemeral user accounts) access to the machine via telnet, as well as POP3, SMTP, HTTP+CGI, and FTP.
Be sure to hand-inspect the profiles in /etc/apparmor.d/ before allowing your user to log in. You can fix anything you want with a plain text editor; run /etc/init.d/apparmor restart to reload all profiles (and unload the profiles you might remove).
It's handy to keep an unconfined root sash(1) shell open when you're first learning how to configure AppArmor. If you ignore the warning about programs that shouldn't have their own profile, it might be difficult to get back into your own system. (Don't forget about booting with init=/bin/sh in the worst of situations.)
You can easily create a very restricted environment by starting bash in restricted mode. Set the user's shell to rbash instead of bash, and that will put it into restricted mode.
http://www.gnu.org/s/bash/manual/html_node/The-Restricted-Shell.html
There's a chance that rbash will be too restrictive for your needs. Among other things, the restricted environment forbids changing directories. But take a look at it and see if it's sufficient.
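Trying it out is cheap (the username is hypothetical; if /bin/rbash doesn't exist on your system, a symlink to bash named rbash behaves the same way):
ln -s /bin/bash /bin/rbash      # only if rbash isn't already present
chsh -s /bin/rbash newdev
# rbash then forbids cd, changing PATH, output redirection, and
# running commands whose names contain slashes, among other restrictions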
