In my home folder in Linux I have several config files that have "rc" as a file name extension:
$ ls -a ~/|pcregrep 'rc$'
.bashrc
.octaverc
.perltidyrc
.screenrc
.vimrc
What does the "rc" in these names mean?
It looks like one of the following:
run commands
resource control
run control
runtime configuration
I've also found this quotation:
The ‘rc’ suffix goes back to Unix's grandparent, CTSS. It had a command-script feature called "runcom". Early Unixes used ‘rc’ for the name of the operating system's boot script, as a tribute to CTSS runcom.
Normally it means "runtime configuration" if it's in a config directory. I think of them as resource files. If you see "rc" in a file name, it could also refer to a version, i.e. a Release Candidate.
Edit: No, I take it back officially... "run commands"
[Unix: from runcom files on the CTSS system 1962-63, via the startup script /etc/rc]
Script file containing startup instructions for an application program (or an entire operating system), usually a text file containing commands of the sort that might have been invoked manually once the system was running but are to be executed automatically each time the system starts up.
Thus, it would seem that the "rc" part stands for "runcom", which I believe can be expanded to "run commands". In fact, this is exactly what the file contains, commands that bash should run.
Quoted from What does “rc” in .bashrc stand for?
I learnt something new! :)
In the context of Unix-like systems, the term rc stands for the phrase "run commands". It is used for any file that contains startup information for a command. It is believed to have originated around 1965, with the runcom facility of the MIT Compatible Time-Sharing System (CTSS).
Reference: https://en.wikipedia.org/wiki/Run_commands
In the Unix world, "rc" stands for "run control".
http://www.catb.org/~esr/writings/taoup/html/ch10s03.html
I figured I would add my findings from a previous dive into this subject.
The short version, IMHO: the "rc" in both bashrc and the init rc scripts stands for runcom, short for "run commands".
The init usage is a homage to the runcoms of CTSS, while in the case of shells, the shell itself, and indeed the underlying macro procedure processor, are direct descendants of the CTSS concepts first described by Louis Pouzin in 1965. See the snippets below.
SUBJECT: The SHELL: A Global Tool for Calling and Chaining Procedures in the System
FROM: Louis Pouzin
DATE: April 2, 1965
https://people.csail.mit.edu/saltzer/Multics/Multics-Documents/MDN/MDN-4.pdf
SUBJ: RUNCOM - A Macro Procedure Processor for the 636 System
FROM: Louis Pouzin
DATE: April 7, 1965
https://people.csail.mit.edu/saltzer/Multics/Multics-Documents/MDN/MDN-5.pdf
The SHELL
4.1 We may envision a common procedure called automatically by the supervisor whenever a user types in some message at his console, at a time when he has no other process in active execution under console control (presently called command level). This procedure acts as an interface between console messages and subroutine. The purpose of such a procedure is to create a medium of exchange into which one could activate any procedure, as if it were called from inside of another program. Hereafter, for simplification, we shall refer to that procedure as the "SHELL".
Requests Stacking
7.1 The chaining of requests, similar to those typed at the console, is straightforward. Consecutive calls to the SHELL, from any procedure, and at any level of recursion, allows an unlimited chaining of requests.
7.2 Another feature commonly used on the present system is the execution of a stack of requests stored into a BCD file. This mode is an easy variation, as it consists in reading a block of several BCD request strings, and postponing the return to the calling program until the block has been exhausted. Due to the present system conventions, the SHELL selects this mode of execution when the name of the request is RUNCOM, while the first argument is the BCD name of the file. But any other convention may work as well.
In addition to this, I offer "mk -- how to remake the system and commands" from the Unix Users Manual Release 3, June 1980
"The lib directory contains libraries used when loading user programs. The
largest and most important of these is the C library. All libraries are in
sub-directories and are created by a makefile or runcom. A runcom is a
Shell command procedure used specifically to remake a piece of the system.
:lib will rebuild the libraries that are given as arguments."
http://bitsavers.trailing-edge.com/pdf/att/unix/System_III/UNIX_Users_Manual_Release_3_Jun80.pdf
Further, and interestingly, the original Bourne shell, bsh, did not read a startup file when launched, unlike csh and ksh, which came after.
https://www.ibm.com/support/pages/overview-shell-startup-files
Given when both ksh and csh came out, and that both make use of a startup file of stacked commands, it makes a lot of sense that the file would be the shell's startup runcom.
-IdS
To understand rc files, it helps to know that Ubuntu boots into one of several runlevels. They are 0-6: 0 being "halt", 1 being "single-user", 2 being "multi-user" (the default runlevel), and so on. This scheme has since been superseded by Upstart and other newer init systems in most Linux distributions, but it is still maintained for backwards compatibility.
Within the /etc directory are several folders labeled rc0.d, rc1.d, and so on through rc6.d. These are the directories the init system consults to know which init scripts to run for a given runlevel. Their contents are symbolic links to the system service scripts residing in the /etc/init.d directory.
In the context you are using it, it would appear that you are listing any files with "rc" in the name. The code in these files sets how the services/tasks start up and run when they are initialized.
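For example, you can see which init scripts are wired up for a given runlevel by listing the corresponding directory (the exact entries vary from system to system):
ls -l /etc/rc2.d
# Each entry is a symlink into /etc/init.d; the leading S or K and the
# two-digit number determine whether the service is started or killed at
# that runlevel, and in what order.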
This came up in coursework, and I'm stuck:
Many systems have more than one version of a utility program so that users can choose the one they want. Suggest a command to find all the versions of make on a system. What determines which one a user actually gets? How might a user override the defaults?
How would you do that?
How UNIX finds programs
Unix-like systems store their executable programs in various directories for historical reasons.
The directories that are searched when you want to run a command are stored in an environment variable called $PATH, separated by colons (:). To see its contents, type echo "$PATH" in a terminal window. On my system, that shows (split to avoid a long line)
/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:
/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
They're searched in that order. If I want to run make, the system will first check /usr/local/sbin/make (which doesn't exist), then /usr/local/bin/make (also non-existent), then /usr/bin/make (which does exist, so it runs that).
How to figure out which one would run
The which program can be used to look through $PATH to figure out what program would be chosen. Running which make on my system produces the output /usr/bin/make.
Conveniently, which has a -a flag to print all executables that match, not just the first one. (I found this by consulting its manual, by running man which.) So which -a make should tell you where all of the versions of make are.
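Concretely, for the coursework question (the output will of course differ from machine to machine):
which -a make   # every make on $PATH, in search order; the first one is what runs
type -a make    # bash builtin; also reports aliases and shell functions named make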
Changing the defaults
If you like, you can change the contents of the $PATH variable, like you can change any environment variable: If I run PATH="$PATH:/home/anko/bin", the next time the system needs to find a program, it will check through all of what $PATH used to be, plus a directory called bin in my home directory if it couldn't find anything else.
I could also prepend the directory, to make it take precedence over anything else, by doing PATH="/home/anko/bin:$PATH".
Why is a bash (.sh) script not executable by default?
I understand that when we touch a file in Linux, it is created for reading purposes.
But since file name extensions such as .sh and .csh are meant for execution,
wouldn't it be ideal for touch to create them in an executable mode?
The question might sound redundant, but I still thought of asking it :)
Ultimately the answer to this question is that that's not what touch was designed to do.
In fact, touch was not even designed to create files; its primary purpose in life is to change file timestamps. The fact that it creates the target file if it doesn't already exist (unless you provide the -c option) is merely a side effect of that purpose, a bit of generosity on the designers' part, and is the only reason it allows you to create files at all.
It should also be mentioned that there are other techniques that create files, such as redirection (echo 'echo '\''I am a script.'\'';' >|script.sh;). The act of file creation is a generic one, and the whole concept of a file is a generic one. A file is just a byte stream; what goes in the byte stream is unspecified at the file abstraction layer. As @AdamGent mentioned, Windows requires certain types of executable files to have certain extensions in order to be executed properly, but even in Windows, you can put executable code in non-executable-extensioned files, and you can put non-executable content in executable-extensioned files. There's no enforcement of file name / file content correspondence at the file layer.
All of that being said, it would often be a convenience if you could easily create a script in Unix that automatically has executable permission set. I've actually written a script to allow me to edit a new file in vim, and then have its permissions set to executable after write-quitting. The reason this potential convenience has not been standardized into a utility is likely related to the concern about security; you don't want people to accidentally make files executable, because that raises the risk of security holes.
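As a rough illustration of the idea (this is not the script mentioned above; the function name and shebang are just placeholders), such a helper could look like:
# hypothetical helper: create the file with a shebang if it doesn't exist yet,
# edit it, then mark it executable once the editor exits
newscript() {
    [ -e "$1" ] || printf '#!/bin/sh\n' > "$1"
    "${EDITOR:-vi}" "$1"
    chmod +x "$1"
}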
You can always write your own program to create a file and make it executable, perhaps based on the extension of the file name.
Another thing that can be added here is that even shell scripts don't always need to be executable. For example, if you write a script that is only intended to be sourced from existing shell processes (via the source or classic . builtins), then the script does not need to be executable at all. Thus, there are cases where the file extension itself does not provide enough information to determine what the appropriate permissions are for the file.
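A quick way to see this for yourself (the file name is arbitrary):
printf 'greet() { echo "hello from sourced code"; }\n' > helpers.sh
chmod -x helpers.sh   # explicitly not executable
. ./helpers.sh        # sourcing still works
greet                 # the function is now defined in the current shell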
There is nothing in the file name that says a file is even a script. Common practice perhaps says that .sh and .csh are scripts but there is no such rule.
What makes a file an executable script is the magic number at the beginning of the file. The magic number #! (the shebang, which has many other names as well) means the file is a script. For example
#!/bin/bash
When you try to execute the file (it must also be set executable in its permissions), the Linux kernel looks at the first two bytes of the file and finds the magic number (#!). It then knows it is supposed to execute whatever comes after the shebang, with the name of the file as an argument followed by the rest of the arguments passed.
If you type the following in the shell
blablabla.sh lol1 lol2
The shell recognizes that you are trying to execute something so it invokes the kernel
exec blablabla.sh lol1 lol2
The kernel finds the shebang, and it becomes
exec /bin/bash blablabla.sh lol1 lol2
By exec I mean one of the exec family of system calls.
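You can watch this happen with a throwaway script (the names simply mirror the example above):
printf '#!/bin/bash\necho "running with args: $@"\n' > blablabla.sh
chmod +x blablabla.sh
./blablabla.sh lol1 lol2   # the kernel effectively runs: /bin/bash ./blablabla.sh lol1 lol2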
Other fun names for #! include sha-bang, hashbang, pound-bang, hash-exclam and hash-pling
Because the .sh script extension is completely arbitrary in Unix environments. It's only a convention. You can name your script whatever you like as long as it has the executable bit set. This is unlike Windows, where I believe it's required (.com, .exe, and I think .dll).
touch just changes the timestamp of the file. Again, it does not care what the file's extension is. In fact, most tools in Unix do not care about file extensions.
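For example:
touch foo.sh      # creates an empty file; its mode comes from your umask, with no execute bit
ls -l foo.sh      # typically shows -rw-r--r--
chmod +x foo.sh   # the executable bit always has to be added explicitly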
I have a program, let's call it exampleProg, in my /opt directory, and I want to run it from any directory, rather than just:
/opt/radFolder/exampleProg
This should be a simple task, I've done it several times before on different computers. I've searched around, and found instructions ranging from:
edit .bash
edit .bashrc
edit .profile (Another stackoverflow answer said that, while this worked at one time, it no longer functions.)
edit /etc/environment/
with PATH="$HOME/bin:$PATH:/opt/radFolder/:" or just adding the /opt/radFolder bit.
Yet none of them seem to work. The problem I'm running into is that there doesn't seem to be a universally agreed-upon solution. I've tried so many that I think one of my changes has prevented the appropriate one from taking effect. Would someone help me put this to rest once and for all? Many thanks in advance.
I'm running Ubuntu 14.04 LTS x64.
First, understand that writing things to those files does not mean everything is instantaneously, and globally, changed. In fact, nothing is changed until the file is sourced (via . or source), and even then, the environment changes apply only to the current shell (and subsequent created children, if export is used).
INVOCATION, near the top of man bash, spells out which files are automatically sourced when. To summarize:
~/.bashrc is read for new non-login, interactive shells, e.g., when you open a GUI terminal. On many systems, this file by default in turn sources /etc/bashrc.
/etc/profile, ~/.bash_profile, and ~/.profile are read by interactive login shells.
Adding to ~/.bashrc should be effective, but it will only work for subsequently invoked, interactive, non-login shells (and their children, if $PATH is exported). However, since it's prone to being sourced repeatedly, using it to add to an existing variable (as with $PATH) can produce repeated concatenations.
An issue with the second category (the profile files) is that if you use a GUI login, the display manager may log you in without ever invoking a login shell, so none of those files is ever sourced. If this is the case, sourcing them from ~/.xsession should work (this has a system-wide counterpart under /etc/X11).
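Putting that together, a defensive way to add a directory to $PATH in ~/.bashrc (or ~/.profile), guarding against the repeated concatenation mentioned above, might look like this ($HOME/bin is just an example directory):
case ":$PATH:" in
    *":$HOME/bin:"*) ;;                        # already present, do nothing
    *) PATH="$HOME/bin:$PATH"; export PATH ;;  # otherwise prepend it
esac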
Hi, I'm sure there is some way of doing what I want, but maybe I'm just attacking it the wrong way. I hope someone can help.
I have a dev box that I SSH in to from several other machines. In order to debug remotely I need to configure my debugger with my client machine's IP, which changes when I log in from different machines. I'm getting bored of doing this manually all the time so thought I'd try and automate it.
I'm creating a script that is automatically run upon SSH connection that will modify a configuration setting in a PHP ini file. The problem is the PHP ini files are all owned by root so I'm not sure how to modify those files if I'm just logging in as a normal user.
There's not really a security concern with my dev box so I could just change the owner of the ini file, but I wanted it to be more automated than that.
My current attempt is a Python script located in my home dir, which is called from .bashrc when I connect via SSH. I don't see how I can gain root privileges from there; I am pretty new to Linux, though. I thought maybe there would be some other method I'm not aware of.
You have a file that is owned by root. You clearly need either to find a way to mark the file as modifiable by you, or a way to elevate your privileges so that you are allowed to modify it.
This leads to the two traditional Unix approaches to doing this. They are:
Create a group with which to mark the file, e.g. initdebug; chgrp/chmod the file so it belongs to the initdebug group and is group-writable; and add yourself to the initdebug group so you can use the group write permission to modify the file (a setup sketch follows the lists below).
Create a very small, audited binary executable (this won't work with a script) that performs the specific modification you desire (for simplicity, I would suggest having it copy one of a selection of root-owned PHP ini files into the right place). Then chown the file so it is owned by root and set the suid bit on the executable so it executes as root.
You can also combine the two approaches, either:
Not making yourself a member of the initdebug group or setting the suid bit on the executable, but rather setting the group of the executable to initdebug and setting its sgid bit; or,
Keeping the executable suid root but making it executable only by the initdebug group, and therefore only by users added to that group.
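A minimal sketch of the group approach, assuming the file in question is /etc/php5/conf.d/xdebug.ini (a made-up path; substitute your own), would be a one-time setup like:
sudo groupadd initdebug
sudo usermod -aG initdebug "$USER"   # group membership takes effect at your next login
sudo chgrp initdebug /etc/php5/conf.d/xdebug.ini
sudo chmod g+w /etc/php5/conf.d/xdebug.ini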
The security trade-off is in the ease/risk of privilege escalation should someone hack your account. If there is a stack/heap overflow or similar vulnerability in the executable and it is executing as root, then you are gone. If the PHP ini file can be modified to open a remote vulnerability, then anyone who can directly modify the ini file can compromise you as well.
As I suspect the latter is possible, you are probably best off with a small executable.
Note: As I alluded to above, Unix does not acknowledge the s[ug]id bits on scripts (defined as anything using the #!... interpreter syntax). Hence, you will have to use something you can compile down to a binary. That probably means C, C++, Java (using gcj), ML, Scheme (MIT), or Haskell (GHC).
If you haven't done any C or C++ programming before, I would recommend one of the others, as a suid binary is not a project with which to learn C/C++. If you don't know any of the other languages, I would recommend either ML or Java as the easiest in which to write something small and simple.
(By the way, http://en.wikipedia.org/wiki/List_of_compilers includes a list of alternative compilers you can use. Just make sure the compiler produces native code, not bytecode; as far as the OS is concerned, a bytecode VM is just another interpreter.)
You can do it by adding your user to the sudoers file on the machine you want to access remotely.
For an example, see my blog: http://nanamo3lyana.blogspot.com/2012/06/give-priviledge-normal-user-as-root.html
Then, in your automated script, prefix the command with sudo.
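A hedged sketch of that approach, assuming the IP change is wrapped in a script at a hypothetical path /usr/local/bin/update-debug-ip: run sudo visudo and add a line that lets your user (youruser here is a placeholder) run only that one command as root without a password,
youruser ALL=(root) NOPASSWD: /usr/local/bin/update-debug-ip
and then have the script that runs at SSH login simply call:
sudo /usr/local/bin/update-debug-ip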
I have some directories with a number of "hidden" files. One example: I'm in a source-controlled sandbox and some of the files have not been checked out yet.
When I hit TAB, I'd like the option of seeing these files.
A similar question has been asked before: CVS Tab completion for modules under linux
The answers to that question summarize to: "Ubuntu's got that built in".
I don't have the option of switching to Ubuntu, but surely I can use the same mechanisms.
How can I hook into the TAB-completion feature of tcsh to add additional files? Support for CVS, SVN and BitKeeper would all be useful.
More important than support for a specific source control system is the ability to control the returned list myself.
An acceptable solution would also be to use a key-binding other than TAB. (ctrl- perhaps)
From the manpage:
the complete builtin command can be used to tell the shell how to complete words other than filenames, commands and variables
That might get you started.
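As a rough, untested sketch in ~/.tcshrc (the file ~/.my-pending-files is made up; replace the backquoted command with whatever produces the list you want):
complete cvs 'p/*/`ls -a`/'
complete svn 'p/*/`cat ~/.my-pending-files`/'
The p/*/.../ pattern tells tcsh to complete any argument position from the words produced by the backquoted command, instead of using ordinary filename completion.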
I do not know how to program in tcsh, but if you can, then you could look at the file named "bash_completion" from the bash-completion project's archive.
The CVS completion code begins on line 1673, and it might be portable to tcsh if you are familiar with the differences between bash and tcsh.
On my ubuntu machine, there is also a section for SVN completion (in /etc/bash_completion) that doesn't seem to be present in the maintainer's archive.
That's not Ubuntu-specific behavior, it's the bash-completion project.
You could use that, if you can switch from tcsh to bash.