How to run a script when accessing a specific path? - linux

I have some path I have to access, which is the result of mounting.
I would like the mounting to be automatic, via a script, and I want that script to run just before an error is thrown from not being able to access the path.
For example, assume the script is
echo scripting!
mkdir -p /non_existing_path
and I want it to run when trying to access (in any way) the path /non_existing_path.
So when I do for example
cd /non_existing_path
or
touch /non_existing_path/my_file.txt
It would always succeed, with the output scripting!. In reality, the script would be more elaborate than that.
Is this possible at all?

Yes, and the important case is that third parties (such as a new C program, the command line, or other scripts) that call, for example, cd should also be affected: a call to cd made as they normally would should invoke the hooked script beforehand.
Out of kernel:
Write a FUSE filesystem that mounts on top of the existing filesystem and, upon the open() syscall, runs a custom script via fork()+execve().
In kernel:
Write a kernel filesystem that would expose /proc/some/interface and create a filesystem "on top" of an underlying existing one. Such a kernel module would execute a special command upon the open() system call and forward all others. In open(), the kernel would write some data to /proc/some/interface and wait for an answer; after receiving the answer, the open() syscall would continue.
Write a user-space daemon that would, for example, poll() for events on /proc/some/interface, read() the events, parse them, and execute the custom script. After the script completes, it would write to /proc/some/interface on the same file descriptor to notify the kernel module that the operation has completed.

Why don't you use autofs?
autofs is a program for automatically mounting directories on an as-needed basis.
https://help.ubuntu.com/community/Autofs
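As a sketch of what that could look like (the paths, map name, and device here are hypothetical, not from the question), autofs is configured with a master map entry and a map file:

```
# /etc/auto.master -- hypothetical entry: manage mounts under /mnt/auto
/mnt/auto  /etc/auto.mymap  --timeout=60

# /etc/auto.mymap -- mount "data" from a hypothetical device on first access
data  -fstype=ext4  :/dev/sdb1
```

With this in place, the first access to /mnt/auto/data (a cd, an open(), etc.) triggers the mount automatically, which is essentially the "run something just before the access would otherwise fail" behavior the question asks for.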

Not sure I understand.
Do you want the script to run even if the path is not accessible?
Do you want the script to run only if the path is not accessible?
Is the script supposed to mount the "not accessible" path?
In any case, I think you should just use an if else statement
The script would look like:
#!/bin/bash
if [ -d "/non_existing_path" ]
then
    bash "$1"
else
    bash "$1"
    mkdir -p /non_existing_path
fi
Let's assume this script's name is "myScript.sh" and the external script's name is "extScript.sh". You would call:
bash myScript.sh /home/user/extScript.sh
This script will check whether the path exists.
If yes, it executes bash /home/user/extScript.sh.
If no, it executes bash /home/user/extScript.sh and creates the directory with mkdir.
Again, I'm not sure I get your goal, but you can adapt it to your needs.

Related

linux file access read/write by root, execute by all

I'm trying to create a shell script that can only be read/written by root but can be executed by everyone. I created a file test.sh, set ownership to "chown root:me test.sh" and set permissions to "chmod 711 test.sh", hoping this would do the trick. However, this results in a file that always needs sudo in order to execute. Is it possible to edit the rights such that anyone (without using sudo) can execute the script, but only root (using sudo) can read/write the file?
This is not possible to achieve, at least with shell scripts.
At the moment of execution, the shell program (I presume bash) needs to read the content of the script file, and the process runs with your user name and permissions.
That said, the bash program (zsh, sh, or any other shell follows the same rules) must be able to read the content of the file, and that can be achieved only by granting the read privilege (+r). So the bare minimum would be a 755 permission model.
An alternative is to use an actual compiled program, which does the job and doesn't require read permission in order to be executed. But this is a totally different pattern.
This response explains it as well.
https://unix.stackexchange.com/questions/34202/can-a-script-be-executable-but-not-readable
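A quick way to see that the shell must read the script (a sketch with a made-up file name; run it as a regular, non-root user, since root bypasses permission checks):

```shell
#!/bin/sh
# Create a trivial script and grant execute-only permission to the owner.
cat > /tmp/ro_demo.sh <<'EOF'
#!/bin/bash
echo hello
EOF
chmod 100 /tmp/ro_demo.sh   # --x------ : execute bit set, read bit cleared

# As a non-root user this fails with "Permission denied": the kernel
# happily starts /bin/bash, but bash itself cannot open the file to read it.
/tmp/ro_demo.sh || echo "script could not be read"

stat -c %a /tmp/ro_demo.sh  # shows the execute-only mode, 100
```

This is exactly why 711 doesn't work for scripts: the interpreter, running as the invoking user, still needs the read bit.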

Why do we need execution permission although we can run any script without it using "bash script file"?

I am wondering when and why we need execute permission in Linux, given that we can run any script without it using the syntax below:
bash SomeScriptFile
Not all programs are scripts — bash for example isn't. So you need execute permission for executable programs.
Also, when you say bash SomeScriptFile, the script has to be in the current directory (or you must give a path to it). If you make the script executable and put it in a directory on your PATH (e.g. $HOME/bin), then you can run the script without the unnecessary circumlocution of bash $HOME/bin/SomeScriptFile (or bash ~/bin/SomeScriptFile); you can simply run SomeScriptFile. This economy is worth having.
Execute permission on a directory is somewhat different, of course, but also important. It permits the 'class of user' (owner, group, others) to access files in the directory, subject to per-file permissions also allowing that.
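A short sketch of that convenience (the directory and script names here are made up):

```shell
#!/bin/sh
# Put an executable script in a directory that we add to PATH,
# then run it by bare name instead of "bash /path/to/script".
mkdir -p /tmp/demo_bin
cat > /tmp/demo_bin/greet <<'EOF'
#!/bin/bash
echo "hello from greet"
EOF
chmod +x /tmp/demo_bin/greet   # without the x bit, bare-name invocation fails

PATH="/tmp/demo_bin:$PATH"
greet                          # found via PATH; prints "hello from greet"
```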
Executing the script by invoking it directly and running the script through bash are two very different things.
When you run bash ~/bin/SomeScriptFile you are really just executing bash -- a command interpreter. bash in turn loads the script and runs it.
When you run ~/bin/SomeScriptFile directly, the system is able to tell this file is a script and finds the interpreter to run it. There is a bit of magic involving the #! on the first line to look for the right interpreter.
The reason we run scripts directly is that the user (and system) needn't know or care whether the command we are running is a script or a compiled executable.
For instance, if I write a nifty shell script called fixAllIlls and later decide to rewrite it in C, as long as I keep the same interface, the users don't have to do anything different.
To them, it is just a program to run.
edit
The operating system checks permissions first for several reasons:
Checking permissions is faster
In the days of old, you could have SUID scripts, so one needed to check the permission bits.
As a result, it was possible to run scripts that you could not actually read the contents of. (That is still true of binaries.)

shell script run when I am root but I get a permission denied when it is invoked from a Makefile (still as root)

I need to run a Make script that invokes a shell script.
I can run the shell script directly as root, but when running make on the Makefile (still as root), make is denied permission to run the same shell script.
The offending line in the Makefile is that one:
PLATFORM=$(shell $(ROOT)/systype.sh)
I could go in and hardcode the value of every PLATFORM variable of every Makefile on the system, but that would be a pointless fix; I'd like to understand why there is that Permission Denied error:
make[1]: execvp: ../systype.sh: Permission denied
PS: The content of the shell script is not the issue; even if the shell script only contains ls or echo linux, the make utility is still denied permission to run it.
PS: I am not a make expert by any means, so if the explanation is related to make, please be as specific as you can.
In your comments above you say when you "run it manually" you use . scriptname.sh, is that correct? You use . followed by scriptname.sh?
That does not run the script, that sources the script. Your statement that scriptname.sh will execute with and without the x permission since it is a shell script is wrong. You can source the script if you have read permissions. But you cannot execute the script unless you have execute permissions.
"Sourcing" means that a new shell is not started: instead your current shell (where you type that command) reads the contents of the script and runs them just as if you'd typed them in by hand, in the current shell. At the end all the side-effects (directory changes, variable assignments, etc.) that were performed in that script are still available in your current script.
"Executing" means that the script is treated like a program, but the program is a new shell that's started, which then reads the contents of the script and executes it. Once the script ends the shell exits and all side-effects are lost.
The $(shell ...) function in make will not source your script (unless you also use . there, which you did not). It will try to run your script. The error you show implies that either systype.sh did not have the execution bit set, or else that it had an invalid #! line. There's no other explanation I can think of.
If sourcing the file really does what you want then why not just use the same method in $(shell ...) that you use in your own personal use:
PLATFORM=$(shell . $(ROOT)/systype.sh)
If changing the user permission didn't work, are you sure that whatever user owns the script is the same user you're using to invoke make? You say you're "running as root"; is the script owned by root? Or is it owned by you and you're running sudo make or similar?
I don't know why you don't just use:
chmod +x systype.sh
and call it a day.
Adding execute permission for the file's group, rather than the file's user, fixed the issue.
PS: I wonder why? It seems the make utility runs shell scripts as a different user than the one that started make...

How to exit a chroot inside a perl script?

While writing a Perl script intended to fully automate the setup of virtual machines (Xen PV), I hit a small, maybe very simple, problem.
Using Perl's chroot function I do my things on the guest file system, and then I need to get back to my initial real root. How the hell do I do that?
Script example:
`mount $disk_image $mount_point`;
chdir($mount_point);
chroot($mount_point);
#[Do my things...]
#<Exit chroot wanted here>
`umount $mount_point`;
#[Post install things...]
I've tried exit; but obviously that exits the whole script.
Searching for a way to exit the chroot, I've found a number of scripts that aim to escape an already set-up chroot (privilege escalation). Since I do the chroot here myself, these methods do not apply.
Tried some crazy things like:
opendir REAL_ROOT, "/";
chdir($mount_point);
chroot($mount_point);
chdir(*REAL_ROOT);
But no go.
UPDATE
Some points to consider:
I can't split the script in multiple files. (Silly reasons, but really, I can't)
The chrooted part involves using a lot of data gathered earlier by the script (before the chroot), hence the need not to launch another script inside the chroot.
Using open, system or backticks is not good, I need to run commands and based on the output (not the exit code, the actual output) do other things.
Steps after the chroot depends on what was done inside the chroot, hence I need to have all the variables I defined or changed while inside, outside.
Fork is possible, but I don't know a good way to correctly handle passing information to and from the child.
The chrooted process cannot "unchroot" itself by exiting (which would just end the process).
You have to spawn a child process, which will chroot.
Something along the lines of the following should do the trick:
if (fork())
{
    # parent
    wait;
}
else
{
    # child
    chroot("/path/to/somewhere/");
    # do some Perl stuff inside the chroot...
    exit;
}
# The parent can continue its work after its chrooted child has done some other stuff...
It still lacks some error checking, though.
You can't undo a chroot() on a process - that's the whole point of the system call.
You need a second process (a child process) to do the work in the chrooted environment. Fork, and have the child undergo the chroot and do its stuff and exit, leaving the parent to do the cleanup.
Try spawning a child process that does the chroot (e.g. with system or fork, depending on your needs) and wait for the child to return before the main program continues.
This looks like it might be promising:
Breaking Out of a Chroot Jail Using PERL
Save the original root as the current working directory or as a file descriptor:
chdir "/";
chroot "/mnt";
# Do something
chroot ".";
OR
open DIR, "<", "/";
chroot "/mnt";
# Do something
chdir DIR;
chroot ".";
close DIR;

How does the shell know which directory it's in?

I have been trying to figure out how a shell knows which directory you're currently in. I know there is an environment variable $PWD but when I try changing it manually, it changes what my shell shows at the prompt but commands like ls and cd are unaffected.
cd is an internal shell command, so I can understand it might use info stored within the shell's memory; but ls is external, and yet running ls with no arguments will list whatever directory I was originally in, regardless of what I do to $PWD.
Each process has its own individual current working directory which the Linux system tracks. This is one of the pieces of information the OS manages for each process. There is a system call getcwd() which retrieves this directory.
The $PWD environment variable reflects what getcwd() returned the last time the shell updated it, but changing it does not actually change the current directory. For that to happen, the shell would have to call chdir() whenever $PWD changes, which it does not do.
This also is the reason cd has to be a shell built-in. When you run a sub-process that child process gets its own working directory, so if cd were an executable then its calls to chdir() would be useless as that would not change its parent's working directory. It would only be changing its own (short-lived) working directory. Hence, cd is a shell built-in to avoid a sub-process being launched.
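You can see the same effect with a subshell, which, like an external program, is a child process (a minimal sketch):

```shell
#!/bin/sh
cd /tmp
( cd / )        # this cd happens in a child process...
pwd             # ...so the parent still prints /tmp
```

If cd were an external program, every cd would behave like the parenthesized one above: effective only inside the child, invisible to your shell.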
The shell sets that variable, but stores the knowledge internally (which is why you can't make cd an external program, it must be a built-in). The shell prompt is composed just before it is displayed each time, and you have specified using $PWD in yours, so the shell reads that in.
Remember: the shell is just a program, like any other program. It can---and does---store things in variables.
As AndiDog and John point out, Unix-like systems (including Linux) actually maintain the working directory for each process through a set of system calls. The storage is still process-local, however.
The Linux kernel stores the current directory of each process. You can look it up in the /proc filesystem (for example, "/proc/1/cwd" for the init process).
The current directory can be changed with the chdir syscall and retrieved with getcwd.
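On Linux you can inspect the kernel's record directly (a sketch; requires /proc):

```shell
#!/bin/sh
cd /tmp
# The kernel's record of this shell's current directory:
readlink /proc/$$/cwd       # prints /tmp

# Changing $PWD fools the prompt but not the kernel:
PWD=/nowhere
readlink /proc/$$/cwd       # still prints /tmp
```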
The current directory is a property of a running program (process) that gets inherited by processes created by that process. Changing the current directory is done via an operating system call; the shell maps the cd operation to that system call. When you run an external program like ls, that program inherits the current directory.
The $PWD variable is how the shell shows you the current directory for you to use it as a variable if you need it. Changing it does not have effect in the real current directory of the shell itself.
You (the OP) launch ls via your command shell, and any process you launch, the shell launches in the context of its own current working directory. So each process you launch has its own current directory (and its own copy of $PWD, in a way).