Reshare CLONE_NEWNS after unshare - linux

I have a part of an application that calls unshare with CLONE_NEWNS to give the process a private mount namespace. The code is similar to the unshare code snippet.
How can I reverse the effect of this unshare? I want to share the parent namespace again.

Get an fd for the original namespace before calling unshare(); then, after unshare(), you can switch back by calling setns(). If the original namespace has not been replaced by the current process or by its parent process, you don't even need to grab the fd beforehand: you can get it at any time by opening /proc/$ppid/ns/mnt (mnt is the one corresponding to CLONE_NEWNS).

Related

"require" sub processes; how to stop them?

I want to send a class instance to a sub process that shall operate on the class and then later stop the process.
I have used require for the module and sent the class instance as a parameter to an init function in the required module. This works as such, but if I want to stop the processes without restarting the complete program, I cannot find a way to do it.
I have limited experience with JavaScript. I did check the child_process functions, but I never got them to work. I also tried something described here on Stack Overflow (see code).
const myChildProgram = require("./myModule");
const myClassInst = new myClass();
myChildProgram.init(myClassInst); // initialize and run sub processes; this launches other async operations
//later in time/code
//stop all processes generated after the myChildProgram.init()
delete require.cache[require.resolve('./myModule')]; //not working
I would like to be able to stop the processes generated by the myChildProgram.init() call.
It appears that you have modules and subprocesses confused. A module is a block of JavaScript code that has been loaded into your current node process. There is no way to stop that code from the outside short of killing your whole node process.
If you want a module to stop doing something it was doing, then the usual solution would be to export a function from that module that, when called, would execute code within the module to stop whatever it was doing.
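As a single-file sketch of that pattern (the inline module stands in for ./myModule, and the interval-based workers are made up for the example):

```javascript
// Stand-in for ./myModule: the module remembers what init() started
// so that an exported stop() function can shut it all down later.
const myModule = (() => {
  const timers = [];
  return {
    init(inst) {
      // start some recurring background work for the instance
      timers.push(setInterval(() => inst.tick && inst.tick(), 1000));
    },
    stop() {
      // stop everything init() started
      while (timers.length) clearInterval(timers.pop());
      return timers.length; // 0 once everything is cleared
    },
  };
})();

myModule.init({ tick() {} });
myModule.stop(); // all intervals cleared; the process can exit normally
```

Deleting the require cache entry, by contrast, only affects what a future require() returns; it does not touch anything the already-loaded module has started.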
We can only help more specifically if you show the actual code for the module and the operation that you want to stop.

NodeJS — Fork Child Process with function string instead of file

I've looked at the documentation for the fork method, and it only describes providing a file path to the child module file.
Does anyone know if it is possible (and undocumented) to pass in the child module directly instead of via a file? Point being, I would like to dynamically generate the module, then create a child process with it.
This is not possible: fork() creates a completely separate process that does not share context or variables with its parent process.
One option you have is to generate the module inside the forked process, passing it the necessary arguments via the command line or via a temporary file so that your child can run:
const data = 'something';
const childProcess = child_process.fork(__dirname + '/worker', [data]);
You can then access the arguments from the child using process.argv[2].
One limitation of this approach is that you can only pass plain data, and the worker cannot call any function in its parent's context. For that you would need some kind of RPC between the child and the parent, which is beyond the scope of this answer.

How to access parent global variable in child process nodejs

I have the following code:
import ChildProcess = require("child_process");
import path = require("path");
global.abc = "token";
ChildProcess.spawn("node", [path.join(process.cwd(), "./install-db.js")]);
In install-db.js I am not able to read the global variable. How should I use global.abc in this child process?
As a child process is a separate entity, you cannot access your main process's global variables inside it.
There are, however, ways to send data/inputs to child processes. For example, you can use command-line arguments to send data to the child process.
Read more about passing arguments to child process: https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options

Migrating proc_dir_entry from kernel 3.1 to kernel 3.18

I'm migrating a kernel module from 3.1 to 3.18. The definition of struct proc_dir_entry was moved to fs/proc/internal.h. How do I use this structure now in the new version? When I tried to include internal.h I got an error that it doesn't exist.
fatal error: fs/proc/internal.h: No such file or directory
Is there something I'm missing to work with proc_dir_entry? I read that this structure was made opaque in 3.10. What is the proper way to work with it?
In my code for example I have:
static struct proc_dir_entry *proc01;
...
parent = proc01->parent;
What is the proper way to work with proc_dir_entry?
What I'm trying to do is EXACTLY this: dereferencing proc_dir_entry pointer causing compilation error on linux version 3.11 and above
I made the exact same modifications as the listed code on my own. The only change is that I'm now building against newer/different kernel headers.
Here is how the ivyl rootkit works.
The kernel module initializes in __init rootkit_init(void), which runs both procfs_init and fs_init.
Both of these functions replace readdir (for kernels 3.10 and older) or iterate (for kernels 3.11 and newer) with a custom version; this is the hiding functionality of the rootkit. They work by making the relevant memory writable, swapping in the replacement function, and then making the memory read-only again.
procfs_init operates on the process filesystem. It creates an entry named rtkit that is readable and writable by everyone, and it replaces the original readdir (iterate) with a version that hides rtkit from view.
fs_init operates on the filesystem in /etc. This is where the module is stored. In other words, it hides the executable code.
The code in procfs_init is what relies on the proc_dir_entry structure. Line by line, it does the following:
Create a /proc entry "rtkit" that is readable and writable by everyone.
Error checking – if the entry was not created, return 0.
Get the parent directory entry.
Error checking – if the parent is null or its name is not "/proc", return 0.
Set the read function of the rtkit entry – this just prints some information about what the rootkit is doing, a kind of help command.
Set the write function of the rtkit entry. This is the main function that brings everything together: it looks for the code "mypenislong" and escalates to root, so the user running the rootkit gets full root privileges. It also hides given processes and given modules as per the command given.
Get the file operations structure (file_operations) of the proc root (proc_root).
From the file operations, save the original readdir (iterate) function.
Make the proc_fops memory writable.
Point the proc_fops iterate member at the rootkit's replacement (the one that does the hiding).
Make proc_fops read-only again.
Return 1.
The code for procfs_init:
static int __init procfs_init(void)
{
//new entry in proc root with 666 rights
proc_rtkit = create_proc_entry("rtkit", 0666, NULL);
if (proc_rtkit == NULL) return 0;
proc_root = proc_rtkit->parent;
if (proc_root == NULL || strcmp(proc_root->name, "/proc") != 0) {
return 0;
}
proc_rtkit->read_proc = rtkit_read;
proc_rtkit->write_proc = rtkit_write;
//substitute proc readdir with our version (using page mode change)
proc_fops = ((struct file_operations *) proc_root->proc_fops);
proc_readdir_orig = proc_fops->iterate;
set_addr_rw(proc_fops);
proc_fops->iterate = proc_readdir_new;
set_addr_ro(proc_fops);
return 1;
}
Since the proc_dir_entry structure is now opaque, how do I replace the functionality of this code? I still need read/write handlers on the /proc entry so that processes can be hidden as required.
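For the entry itself, the 3.10+ replacement is to stop dereferencing proc_dir_entry and hand your own file_operations to proc_create(), which replaces create_proc_entry() plus the read_proc/write_proc assignments. A sketch for 3.18 (the stub handlers are hypothetical ports of rtkit_read/rtkit_write to the file_operations signatures; the old logic would move into them):

```c
#include <linux/module.h>
#include <linux/proc_fs.h>

static ssize_t rtkit_read(struct file *f, char __user *buf,
                          size_t len, loff_t *off)
{
    return 0; /* stub: port the old rtkit_read (read_proc) logic here */
}

static ssize_t rtkit_write(struct file *f, const char __user *buf,
                           size_t len, loff_t *off)
{
    return len; /* stub: port the old rtkit_write (write_proc) logic here */
}

static const struct file_operations rtkit_fops = {
    .owner = THIS_MODULE,
    .read  = rtkit_read,
    .write = rtkit_write,
};

static struct proc_dir_entry *proc_rtkit;

static int __init procfs_init(void)
{
    /* proc_create() replaces create_proc_entry(); handlers are supplied
       up front instead of being assigned to the entry afterwards */
    proc_rtkit = proc_create("rtkit", 0666, NULL, &rtkit_fops);
    return proc_rtkit ? 1 : 0;
}
```

Note that the ->parent walk to proc_root and the direct proc_fops patching are no longer possible through public headers; locating the root iterate now has to go through some other route, which is what the linked question's answers discuss.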
Edit: modified question title and removed extraneous statement. Added clarification on what I'm trying to do.
Edit: Added description of ivyl rootkit workings.

Accessing file handles across multiple threads in TCL

I am using tcl version 8.5.
I have created an Itcl class, itcl::class C_LOG, inside which I have defined a few methods. One of them, the public method openLog (unknown) {}, performs a file open/append operation:
if { [catch {open $filename a} logFileId ] } {
error $logFileId
}
Outside the class I have created a multithreaded program with a public method userInfo, which prints the userinfo env value into the file already opened above:
puts $logFileId $userinfo
But I am getting the error: can not find channel named "fileXXXX".
It seems the issue is that I created the file handle outside the thread and am trying to access it inside a thread. I'm not sure whether this can actually work; if it can,
kindly let me know how to make file handles/channels available inside a thread.
It's actually not that hard: you have to transfer the channel to the other thread:
::thread::transfer $otherThread $logFileId
Once you have done that, you can only access the channel from that other thread.
If you want to log from several threads, I suggest creating a dedicated logging thread and sending it the data it should log:
set logThread [::thread::create]
thread::transfer $logThread $logFileId
# And to log something:
thread::send -async $logThread [list puts $logFileId $userinfo]
