clone: operation not permitted - linux

I am using isolate, a sandbox that isolates the execution of another program using Linux containers. It's very handy and it works very well locally on my computer (I can run fork bombs and infinite loops and it contains them all).
Now I'm trying to get this to work on an Ubuntu 12.04 server I have, but I'm having some difficulties with it. It's a fresh server too.
When I run:
sudo isolate --run -- mycommand
(for mycommand I usually try python3 or something), I get:
clone: Operation not permitted
So, I dug into the clone function (called like this in isolate.c):
box_pid = clone(
    box_inside, // Function to execute as the body of the new process
    argv,       // Pass our stack
    SIGCHLD | CLONE_NEWIPC | CLONE_NEWNET | CLONE_NEWNS | CLONE_NEWPID,
    argv);      // Pass the arguments
if (box_pid < 0)
    die("clone: %m");
if (!box_pid)
    die("clone returned 0");
box_keeper();
Here's what the man page says about clone's return value:
On success, the thread ID of the child process is returned in the caller's thread of execution. On failure, -1 is returned in the caller's context, no child process will be created, and errno will be set appropriately.
And this is the error I'm getting:
EPERM Operation not permitted (POSIX.1)
And then I also found this:
EPERM CLONE_NEWNS was specified by a non-root process (process without CAP_SYS_ADMIN).
The clone call does indeed pass CLONE_NEWNS to run the program in a new namespace. I actually tried removing that flag, but I keep getting clone: Operation not permitted.
So, it all seems to point to missing root privileges, but I actually ran the command as root (with and without sudo, just to be sure), and also as a normal user in the sudoers group. None of that worked, even though it works very well locally. Root privileges work for everything else, but for some reason this isolate program doesn't.
I tried both with isolate installed in /usr/bin and running ./isolate from a local folder.

I had this issue because I was trying to use isolate within a docker container.
Rerunning the container with the --privileged flag fixed it for me.

You can also pass --cap-add=SYS_ADMIN to docker run,
or use --security-opt seccomp=unconfined,
or provide your own whitelist of allowed system calls; see https://docs.docker.com/engine/security/seccomp/#pass-a-profile-for-a-container
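Putting the above options together, any one of the following invocations should let isolate's clone() call succeed inside a container. This is only a sketch; the image name and command are placeholders:

```shell
# Broadest option: full privileges (use with care)
docker run --privileged myimage isolate --run -- mycommand

# Narrower: grant just the capability the namespace flags require
docker run --cap-add=SYS_ADMIN myimage isolate --run -- mycommand

# Alternatively, disable the default seccomp profile
docker run --security-opt seccomp=unconfined myimage isolate --run -- mycommand
```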

Stopping a subprocess run by exec.Command in golang

This might look like a duplicate of similar questions floating around the internet, but it isn't; I couldn't find one that matches, so I'm asking here.
The thing is, I have a Go program named abc.go which contains two functions, run and stop, that run and stop a someScript.sh script; they are called on API hits. I run abc.go with the command sudo go run abc.go someFolder/someScript.sh, passing the someScript.sh path as an argument. In stop(), I save the process group ID and then kill the whole process group.
But when I call run and then stop functions, it gives me this output
pid=5844 duration=13.667µs err=exec: already started
and doesn't actually stop the running docker container (I'm checking with docker container ls -a).
The someScript.sh file is:
#!/bin/bash
docker container run --rm --name someContainerName nginx
The abc.go file is:
func Run() {
    someVar = true
    execCMD = exec.Command("/bin/sh", "-c", commandFromTerminal)
    output, err = execCMD.CombinedOutput()
    fmt.Println("Output()=", bp.Output())
    someVar = false
}
func Stop() {
    execCMD.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
    start := time.Now()
    syscall.Kill(-execCMD.Process.Pid, syscall.SIGKILL)
    err := execCMD.Run()
    fmt.Printf("pid=%d duration=%s err=%s\n", execCMD.Process.Pid, time.Since(start), err)
}
As per my understanding, it seems the docker command in someScript.sh didn't run the docker container as a child/grandchild of /bin/bash, but rather as a separate process, which the code in my stop() is unable to actually stop.
Below is the flow diagram of my understanding: calling abc.go internally calls /bin/bash, which runs sudo as its child, which in turn has someScript.sh as a child. And finally docker, which is not running as any child/grandchild of the above hierarchy, but as a different process.
My question finally is, how to stop this docker container on calling stop(). Or how to make this docker container run as a subchild of the hierarchy so that I can kill it using process-groupID method which I have used above.
PS: I have also tried
err := execCMD.Process.Kill()
if err != nil {
panic(err.Error())
}
execCMD.Process.Release()
but it too didn't help.
docker is just a client for the docker daemon. docker run simply sends a few HTTP requests to the daemon, and the daemon sets up the container and executes it.
So docker run is a grandchild of your Go program, but the nginx processes are descendants of the Docker daemon, and entirely unrelated to your Go program. Mind you, the docker daemon can even be on a different machine, in principle at least.
That being said,
Assigning SysProcAttr after a process has been started has no effect.
You're calling Run in Stop (very suspicious) and you cannot Run a process that has already been started, even after it terminated.
Sending SIGKILL gives docker run no chance to terminate the container. After fixing the other errors, it's possible that the docker daemon takes care of the cleanup due to the --rm flag (I forget how this works, exactly). If not, send SIGTERM instead.

Why sometimes init process read /dev/initctl return -1?

We use sysvinit to build our system. When I type the poweroff command, sometimes the system does not respond at all.
I did some investigation and found that in init's check_init_fifo() function, the line:
n = read(pipe_fd, &request, sizeof(request));
sometimes returns -1 with errno set to EAGAIN.
I modified the init code to retry 5 times when errno is EAGAIN, but that did not help.
Does anyone know why reading /dev/initctl sometimes returns -1? How can I resolve this?
Thanks.
I finally found the root cause. There is an LXC container running on our system, and devfs is shared between this container and the host. The init processes of both the container and the host can read /dev/initctl, but sometimes the host's init is not the first one to read it.

Problems with Exec PeopleCode from PeopleSoft Application Engine

On a Unix server, I am running an application engine via the process scheduler.
In it, I am attempting to use a "zip" Unix command from within an Exec PeopleCode function.
However, I only get the error
PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I have tried several approaches. The most logical, I thought, was to change directory back to the root, then change to the specified directory so that I could easily use the zip command, such as the following:
Exec("cd / && cd /opt/psfin/pt850/dat/PSFIN1/PYMNT && zip INVREND INVREND.XML");
1643 12.20.34 0.000048 72: Exec("cd /opt/psfin/pt850/dat/PSFIN1/PYMNT");
1644 12.20.34 0.001343 PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I've even tried the following, just to see if anything works from within an Exec:
Exec("ls");
Sure enough, it gave the same error.
Now, some of you may be wondering: does the account associated with the process scheduler actually have authority on this particular directory path on the server? Well, I was able to create the XML file given in the previous command with no problems.
I just cannot seem to be able to modify it with the Exec issuance of Unix commands.
I'm wondering if this is a rights-and-permissions issue on the Unix server with regard to the operator ID that the process scheduler runs under. However, given that it can create and write to a file there, I cannot understand why the Exec command would meet any resistance. Just my shot in the dark...
Any help would be GREATLY appreciated!!!
Thanks,
Flynn
Not sure if you're still having an issue, but in your Exec code, adding the optional %FilePath_Absolute constant should help. When that constant is left off, PS automatically prefixes all commands with <PS_HOME>. You'll have to specify absolute paths with this flag on though. I've changed the command to something that should work.
Exec("zip /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND.XML", %FilePath_Absolute);
The documentation at PeopleBooks is a little confusing sometimes, but it explains it fairly well in this case.
You can always store the absolute location in a variable and prefix that to your commands so you don't have to keep typing out /opt/psfin/pt850/dat/PSFIN1/PYMNT/.

Detect if running with sudo rights

Is it possible to detect whether I have sudo rights when I run
node app.js
I would like
sudo node app.js
to tell me that app.js is running with sudo rights.
I can't guarantee this is correct since I currently have no way of testing it, but if I remember correctly you should be able to use the getuid() function from the process package (see the documentation).
(Note: This only works on POSIX platforms, which means no Windows)
This should return 0 when you run the command with super user permissions.
IMPORTANT: You should never run a webserver like node with super user permissions. If you need the permissions for some setup work, you should drop the granted root permissions by doing something like this at the end of your initialization work:
var uid = parseInt(process.env.SUDO_UID);
if (uid) process.setuid(uid);
I found the accepted answer confusing. Coming from that, and based on some personal testing, the following code works:
function isRoot() {
return !!process.env.SUDO_UID; // SUDO_UID is undefined when not root
}
as well as this:
function isRoot() {
return !process.getuid(); // getuid() returns 0 for root
}

Is there a way to disable the cvs init command for the cvsd daemon?

Is there a way to prevent users from doing 'cvs init'?
'cvs init' creates a new repository. The doc says it is a safe operation on an existing repository, since it does not overwrite any files. But the problem is, administrative files in CVSROOT will be changed.
For example, we have a CVSROOT/loginfo script that mails commit info to a mailing group. After doing cvs init on that repo, it is replaced by a 'clean' version.
We use cvs 1.12.13 on a linux box running as stand-alone server and connect mostly from windows using the pserver protocol.
Setting the rights in CVSROOT didn't help, because the cvsd daemon runs as root (it needs root to switch to the executing user).
The problem is that some users not so familiar with cvs tried 'cvs init' instead of 'cvs import' to create a new module.
I'm assuming that you have sysadmin authority over the machines. You could provide a wrapper around the real CVS binary to prevent certain commands from running and store this wrapper in such a way that it gets picked up before the real CVS. It's a bit of a hack but in a pinch, it would work:
#!/bin/bash
REAL_CVS=/usr/bin/cvs
case "$1" in
  init)
    echo "The use of $1 is restricted. Contact your CVS administrator"
    exit 1
    ;;
esac
exec "$REAL_CVS" "$@"
Another option would be to recompile the CVS client with the init command disabled. Take a look at:
http://cvs.savannah.gnu.org/viewvc/cvs/ccvs/src/client.c?revision=1.483&view=markup
It would be trivial to modify this function to print an error message instead of sending the command.
void
send_init_command (void)
{
    /* This is here because we need the current_parsed_root->directory variable. */
    send_to_server ("init ", 0);
    send_to_server (current_parsed_root->directory, 0);
    send_to_server ("\012", 0);
}
