How to exit a chroot inside a perl script? - linux

While writing a Perl script intended to fully automate the setup of virtual machines (Xen PV), I hit a small, maybe very simple, problem.
Using Perl's chroot function I do my things on the guest file system, and then I need to get back to my initial real root. How the hell do I do that?
Script example:
`mount $disk_image $mount_point`;
chdir($mount_point);
chroot($mount_point);
#[Do my things...]
#<Exit chroot wanted here>
`umount $mount_point`;
#[Post install things...]
I've tried exit; but obviously that exits the whole script.
Searching for a way to exit the chroot, I've found a number of scripts that aim to break out of an already-established chroot (privilege escalation). Since I set up the chroot myself here, these methods do not apply.
Tried some crazy things like:
opendir REAL_ROOT, "/";
chdir($mount_point);
chroot($mount_point);
chdir(*REAL_ROOT);
But no go.
UPDATE
Some points to consider:
I can't split the script into multiple files. (Silly reasons, but really, I can't.)
The chrooted part involves using a lot of data gathered earlier by the script (before the chroot), which rules out launching another script inside the chroot.
Using open, system or backticks is not good enough: I need to run commands and, based on their output (not the exit code, the actual output), do other things.
Steps after the chroot depend on what was done inside the chroot, hence I need all the variables I defined or changed while inside to be available outside.
Fork is possible, but I don't know a good way to correctly handle passing information to and from the child.

A chrooted process cannot "unchroot" itself by exiting (exiting would just end the script).
You have to spawn a child process, which will do the chroot.
Something along the lines of the following should do the trick:
if (fork())
{
    # parent
    wait;
}
else
{
    # child
    chroot("/path/to/somewhere/");
    # do some Perl stuff inside the chroot...
    exit;
}
# The parent can continue its stuff after its chrooted child did some other stuff...
It still lacks some error checking, though (in particular, fork returns undef on failure, which would send the parent down the "child" branch).
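Since the question also needs the child's results back in the parent (see the UPDATE), one option is to have the child write what it finds to a pipe before exiting and have the parent read it after wait. Here is a minimal sketch of that, not the answer's original code; the mount point and the uname command are just placeholders:
#!/usr/bin/perl
use strict;
use warnings;

pipe(my $reader, my $writer) or die "pipe failed: $!";

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid) {
    # parent: close the unused write end, then collect the child's report
    close $writer;
    my @results = <$reader>;
    close $reader;
    waitpid($pid, 0);
    print "child reported: @results";
}
else {
    # child: close the unused read end, chroot, work, report back
    close $reader;
    chdir("/mnt/guest")  or die "chdir failed: $!";    # placeholder mount point
    chroot("/mnt/guest") or die "chroot failed: $!";
    my $output = `uname -r`;          # any command whose output you need
    print {$writer} $output;
    close $writer;
    exit 0;
}
For structured data rather than lines of text, the core Storable module (freeze on one end, thaw on the other) can be sent over the same pipe.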

You can't undo a chroot() on a process - that's the whole point of the system call.
You need a second process (a child process) to do the work in the chrooted environment. Fork, and have the child undergo the chroot and do its stuff and exit, leaving the parent to do the cleanup.

Try spawning a child process that does the chroot (e.g. with system or fork, depending on your needs) and wait for the child to finish before the main program continues.

This looks like it might be promising:
Breaking Out of a Chroot Jail Using PERL

Save the original root as the current working directory or as a directory handle:
chdir "/";
chroot "/mnt";
# Do something
chroot ".";   # cwd is still the real root, so this restores it
OR
opendir my $real_root, "/" or die "opendir: $!";
chroot "/mnt";
# Do something
chdir $real_root;   # fchdir(2) back to the saved real root
chroot ".";
closedir $real_root;
(The second variant uses opendir rather than open, since opening a directory for reading is not portable; chdir accepts a directory handle on systems with fchdir(2). Both variants require the process to keep root privileges.)
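Putting that together with the mount/umount flow from the question, a minimal sketch could look like this (paths are the question's placeholders, error handling kept short, and it relies on the process keeping root privileges throughout):
#!/usr/bin/perl
use strict;
use warnings;

my $disk_image  = "/path/to/guest.img";   # placeholder
my $mount_point = "/mnt/guest";           # placeholder

system("mount", $disk_image, $mount_point) == 0 or die "mount failed";

opendir my $real_root, "/" or die "opendir /: $!";

chdir($mount_point)  or die "chdir: $!";
chroot($mount_point) or die "chroot: $!";

# [Do my things...] -- everything runs in one process, so every
# variable set or changed here is still visible after the escape.

chdir($real_root) or die "fchdir back: $!";   # back to the saved real root
chroot(".")       or die "chroot back: $!";
chdir("/");
closedir $real_root;

system("umount", $mount_point) == 0 or die "umount failed";

# [Post install things...]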

Related

How to run a script when accessing a specific path?

I have some path I have to access, which is the result of mounting.
I would like the mounting to be automatic, via a script, and I want that script to run just before an error is thrown from not being able to access the path.
For example, assume the script is
echo scripting!
mkdir -p /non_existing_path
and I want it to run when trying to access (in any way) the path /non_existing_path.
So when I do for example
cd /non_existing_path
or
touch /non_existing_path/my_file.txt
It would always succeed, with the output scripting!. In reality, the script would be more elaborated than that.
Is this possible at all?
Yes. The important case is that third parties (such as a new C program, a command line, or other scripts) that call, for example, cd should also be affected: a call to cd made the way they normally make it should invoke the hooked script beforehand.
Out of kernel:
Write a FUSE filesystem that mounts on top of the other filesystem and that, upon an open() syscall, runs fork()+execve() on a custom script.
In kernel:
Write a kernel filesystem that exposes /proc/some/interface and creates a filesystem "on top" of the existing underlying one. Such a kernel module would execute a special command upon the open() system call and forward all others. In the open() system call, the kernel would write some data to /proc/some/interface and wait for an answer. After receiving the answer, the open() syscall would continue.
Write a user-space daemon that would, for example, poll() on /proc/some/interface waiting for events, then read() the events, parse them, and execute the custom script. After the script completes, it would write to /proc/some/interface on the same file descriptor to notify the kernel module that the operation has completed.
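To give a flavour of the out-of-kernel route, here is a rough read-only passthrough sketch using the CPAN Fuse module. The backing directory, mount point, and hook script are all hypothetical, the mount point has to exist before mounting, and a real overlay would need many more callbacks (write, mkdir, unlink, ...):
#!/usr/bin/perl
use strict;
use warnings;
use Fuse;
use POSIX qw(ENOENT);

my $real = "/real/backing/dir";        # hypothetical directory being wrapped
my $hook = "/usr/local/bin/hook.sh";   # hypothetical script to run on open()

sub real_path { return $real . shift }

sub my_getattr {
    my @s = lstat(real_path(shift));
    return @s ? @s : -ENOENT();
}

sub my_getdir {
    opendir(my $dh, real_path(shift)) or return -ENOENT();
    my @entries = readdir($dh);
    closedir $dh;
    return (@entries, 0);
}

sub my_open {
    my ($path, $flags) = @_;
    system($hook);                     # fork()+exec() the custom script
    sysopen(my $fh, real_path($path), $flags) or return -ENOENT();
    close $fh;
    return 0;
}

sub my_read {
    my ($path, $size, $offset) = @_;
    open(my $fh, '<', real_path($path)) or return -ENOENT();
    seek($fh, $offset, 0);
    my $n = read($fh, my $buf, $size);
    close $fh;
    return defined $n ? $buf : -ENOENT();
}

Fuse::main(
    mountpoint => "/non_existing_path",   # must be mkdir'ed first
    getattr    => \&my_getattr,
    getdir     => \&my_getdir,
    open       => \&my_open,
    read       => \&my_read,
);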
Why don't you use autofs?
autofs is a program for automatically mounting directories on an as-needed basis.
https://help.ubuntu.com/community/Autofs
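For instance, a direct map along these lines (the device and fstype are placeholders) makes the kernel mount the path automatically on first access:
# /etc/auto.master
/-    /etc/auto.direct

# /etc/auto.direct
/non_existing_path    -fstype=ext4    :/dev/sdb1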
Not sure I understand.
Do you want the script to run even if the path is not accessible?
Do you want the script to run only if the path is not accessible?
Is the script supposed to mount the "not accessible" path?
In any case, I think you should just use an if else statement
The script would look like:
#!/bin/bash
if [ -d "/non_existing_path" ]
then
    bash "$1"
else
    bash "$1"
    mkdir -p /non_existing_path
fi
Let's assume this script's name is "myScript.sh" and the external script's name is "extScript.sh". You would call:
bash myScript.sh /home/user/extScript.sh
This script will check whether the path exists.
If yes, execute bash /home/user/extScript.sh
If no, execute bash /home/user/extScript.sh and mkdir...
Again, I'm not sure I fully get your goal, but you can adapt it to your needs.

Can I run "cd" in a different xterm process?

RH6. Is it possible to issue, for example, a cd command in a running xterm process FROM a different process? I know the PID of the existing xterm process. I actually want to "echo" a message first, and then cd. Something like...
echo "Your time in this workarea has expired. You are being sent home"
cd ~
It would be great if I could do this as a different user somehow (not the owner of the target proc) (I am not and cannot be root). But if that is not possible, perhaps having the target xterm owner create an executable which wraps these commands inside, and then setting the sticky bit on the executable might work when the 2nd proc goes to run it. Not sure if lint checking will catch this as some sort of foul.
I would just make this a comment, but I don't have enough reputation. But I think this might be on the right track:
https://serverfault.com/questions/178457/can-i-send-some-text-to-the-stdin-of-an-active-process-running-in-a-screen-sessi
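For the "echo a message" half, you can at least write text to the other xterm's terminal device, assuming you have permission to open it. This only displays the message there; it does not make the shell in that xterm execute anything. A sketch (the pts device name is hypothetical; find the real one by running tty in the target xterm):
#!/usr/bin/perl
use strict;
use warnings;

my $tty = "/dev/pts/3";   # hypothetical device of the target xterm
open(my $fh, '>', $tty) or die "cannot open $tty: $!";
print {$fh} "Your time in this workarea has expired. You are being sent home\n";
close $fh;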

What happens when you execute "ls" in Bash

Can someone provide a detailed description of what happens when you execute the "ls" command in Linux? What system calls are used? What does the file system do (obviously depending on which file system is used)? If someone can provide an in-depth discussion of this topic or point me to some good resources, that would be great! Thanks!
Bash, as the command interpreter, first checks whether the word is special in its own language: a shell keyword or a shell built-in.
ls isn't among the shell keywords, so bash then checks aliases and replaces any alias with its value; most likely there is something like ls='ls --color=auto'.
It looks for an ls executable in the paths specified by the PATH environment variable. Usually that's /bin/ls.
It forks (fork()) a new process and execs (exec()) the ls binary in it. The environment is inherited from the parent process by the new ls process.
The new process is put in the foreground of the terminal (bash waits in the background).
The ls process loads the shared libraries it needs from the dynamic linker's search paths (see ldd /bin/ls).
It executes lots of system calls, as you can check with strace; the main ones, I believe, are openat() and getdents(): the first opens the directory and the second reads the entries inside it.
It prints its output and exits; the kernel notifies the parent (SIGCHLD), and bash reaps the child with wait() and takes back the foreground.
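The fork/exec/wait dance described above can be sketched in a few lines of Perl (using /bin/ls as in the answer; a real shell also handles job control, redirections, and so on):
#!/usr/bin/perl
use strict;
use warnings;

my $pid = fork();                  # duplicate the current process
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: replace this process image with ls, inheriting the environment
    exec("/bin/ls", "-l") or die "exec failed: $!";
}

waitpid($pid, 0);                  # parent: wait for ls to finish, as the shell does
printf "ls exited with status %d\n", $? >> 8;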
The current process (call it the parent) finds ls via the $PATH variable, e.g. /usr/bin/ls.
The parent forks a child process, passing the whole environment along; the child's process image is /usr/bin/ls.
Run without arguments, ls lists the child's inherited current working directory, e.g. /foo/bar (as if it had executed /usr/bin/ls /foo/bar).
The child process prints its output and exits.
The parent becomes interactive again.

Running scripts from Perl CGI programs with root permissions

I have a Perl CGI that is supposed to allow a user to select some files from a filesystem and then send them via Rsync to a remote server. All of the HTML is generated by the Perl script, and I am using query strings and temp files to give the illusion of a stateful transaction. The Rsync part is a separate shell script that is called with the filename as an argument (the script also sends emails and a bunch of other stuff, which is why I haven't just moved it into the Perl script). I wanted to use sudo without a password, and I set up sudoers to allow the apache user to run the script without a password and disabled requiretty, but I still get errors in the log about no tty. I then tried using su -c scriptname, but that is failing as well.
TL;DR: Is it awful practice to use a Perl CGI script to call a Bash script via sudo, and how are you handling privilege escalation for Perl CGI scripts? Perl 5.10 on a Linux 2.6 kernel.
Relevant Code: (LFILE is a file containing the indexes for the array of all files in the filesystem)
elsif ( $ENV{QUERY_STRING} =~ 'yes' ) {
    my @CMDLINE = qw(/bin/su -c /bin/scriptname.sh);
    print $q->start_html;
    open(TFILE, '<', '/tmp/LFILE') or die "cannot open /tmp/LFILE: $!";
    print '<ul>';
    foreach (<TFILE>) {
        $FILES[$_] =~ s/\/.*\///g;    # keep only the file's basename
        print "Running command @CMDLINE $FILES[$_]";
        print $q->h1("Sending File: $FILES[$_]");
        `@CMDLINE $FILES[$_]` or print $q->h1("Problem: $?");
    }
}
However you end up doing this, you have to be careful. You want to minimise the chance of a privilege escalation attack. Bearing that in mind….
sudo is not the only way that a user (or process) can execute code with increased privileges. For this sort of application, I would make use of a program with the setuid bit set.
Write a program which can be run by an appropriately-privileged user (root, in this case, although see the warning below) to carry out the actions which require that privilege. (This may be the script you already have, and refer to in the question.) Make this program as simple as possible, and spend some time making sure it is well-written and appropriately secure.
Set the "setuid bit" on the program by doing something like:
chmod a+x,u+s transfer_file
This means that anyone can execute the program, but it runs with the privileges of the program's owner, not those of the user running it.
Call the (privileged) transfer program from the existing (non-privileged) CGI script.
Now, in order to keep required privileges as low as possible, I would strongly avoid carrying out the transfer as root. Instead, create a separate user who has the necessary privileges to do the file transfer, but no more, and make this user the owner of the setuid program. This way, even if the program is open to being exploited, the exploiter can use this user's privileges, not root's.
There are some important "gotchas" in setting up something like this. If you have trouble, ask again on this site.
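The call from the existing CGI script can then be an ordinary system invocation. A sketch, where the helper's path and argument convention are assumptions:
# Inside the CGI script: invoke the privileged helper without a shell
my $rc = system('/usr/local/bin/transfer_file', $filename);   # hypothetical path
if ($rc != 0) {
    print $q->h1(sprintf "Transfer failed with status %d", $? >> 8);
}
Using the list form of system avoids passing $filename through a shell, which matters here since the filename comes from the web user.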

How does the shell know which directory it's in?

I have been trying to figure out how a shell knows which directory you're currently in. I know there is an environment variable $PWD, but when I try changing it manually, it changes what my shell shows at the prompt, yet commands like ls and cd are unaffected.
cd is an internal shell command, so I can understand it might use info stored within the shell's memory, but ls is external, and yet running ls with no arguments will list whatever directory I was originally in, regardless of what I do to $PWD.
Each process has its own individual current working directory which the Linux system tracks. This is one of the pieces of information the OS manages for each process. There is a system call getcwd() which retrieves this directory.
The $PWD environment variable reflects what getcwd() returned the last time the shell checked, but changing it does not actually change the current directory. For that to work, the shell would have to call chdir() whenever $PWD changes, which it does not do.
This also is the reason cd has to be a shell built-in. When you run a sub-process that child process gets its own working directory, so if cd were an executable then its calls to chdir() would be useless as that would not change its parent's working directory. It would only be changing its own (short-lived) working directory. Hence, cd is a shell built-in to avoid a sub-process being launched.
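The point that a child cannot change its parent's directory is easy to demonstrate; a small sketch (any directory would do in place of /tmp):
#!/usr/bin/perl
use strict;
use warnings;
use Cwd qw(getcwd);

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    chdir("/tmp") or die "chdir: $!";   # changes only the child's cwd
    print "child cwd:  ", getcwd(), "\n";
    exit 0;
}

waitpid($pid, 0);
print "parent cwd: ", getcwd(), "\n";   # unaffected by the child's chdir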
The shell sets that variable, but stores the knowledge internally (which is why you can't make cd an external program, it must be a built-in). The shell prompt is composed just before it is displayed each time, and you have specified using $PWD in yours, so the shell reads that in.
Remember: the shell is just a program, like any other program. It can---and does---store things in variables.
As AndiDog and John point out, Unix-like systems (including Linux) actually maintain the working directory for each process through a set of system calls. The storage is still per-process, however.
The Linux kernel stores the current directory of each process. You can look it up in the /proc filesystem (for example, "/proc/1/cwd" for the init process).
The current directory can be changed with the chdir syscall and retrieved with getcwd.
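For example, you can read that /proc link directly from Perl (Linux-specific; $$ is the current process's PID):
my $cwd = readlink("/proc/$$/cwd");   # where this process currently is
print "current directory: $cwd\n";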
The current directory is a property of a running program (process) that gets inherited by processes created by that process. Changing the current directory is done via an operating system call; the shell maps the cd operation to that call. When you run an external program like ls, that program inherits the current directory.
The $PWD variable is how the shell shows you the current directory so you can use it as a variable if you need it. Changing it has no effect on the real current directory of the shell itself.
You (OP) launch ls via your command shell, and any process you launch, the shell launches in the context of its current working directory. So each process you launch has its own current directory (its own $PWD, in a way).
