Is there an operating-system-specific way on Linux/Darwin/Windows to restrict access to certain virtual memory pages to only one thread, so that when another thread tries to access them, the OS intercepts the access and reports an error?
I'm trying to emulate, within one multithreaded process, the behavior of fork with multiple processes, where each worker has its own private memory plus some explicitly shared memory, mainly to catch the whole class of programming errors where one worker accesses memory belonging to another worker.
As a general proposition, this is not possible. The whole idea of threads is to have multiple streams of execution that share the same address space. If you're a kernel-mode commando, you might be able to come up with some modification of the page tables a thread uses, making pages inaccessible from user mode until the owning thread unlocks them.
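For context on why the standard user-space APIs can't express this: mprotect(2) changes page protection for the whole process, so once a page is made inaccessible, every thread faults on it, not just the ones you want to exclude. A minimal sketch (POSIX only, error handling elided):

    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        long pagesize = sysconf(_SC_PAGESIZE);
        void *page = mmap(nullptr, pagesize, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        // The protection change below applies to the entire process:
        // *any* thread touching the page now gets SIGSEGV, so there is
        // no way to single out one "owner" thread with this API.
        mprotect(page, pagesize, PROT_NONE);
        // ((char *)page)[0] = 1;  // would fault in whichever thread ran it
        munmap(page, pagesize);
        return 0;
    }

Some debugging setups approximate per-thread ownership by toggling mprotect() around each thread's accesses, but that is racy and slow precisely because the page tables are shared.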
For a Mac application I'm using an external (C++) library that has built-in memory management. A drawback of that memory manager is that objects must be deleted on the same thread that called new.
Currently I'm using GCD to run code concurrently, but I run into the problem that objects from that library get allocated on various threads and I can't correctly delete them.
Is there a way to call the delete operator on the original thread that called new? I realise that GCD wants to abstract the underlying threads away from me, but otherwise I'll have to write a custom GCD-like implementation where I have full control over the threads.
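One workaround, sketched below rather than taken from any GCD API, is to stop letting GCD pick the thread for those calls at all: run every new and delete for that library on one dedicated thread you own, and post work to it through a queue. Note that a GCD serial queue is not enough here, because a serial queue guarantees ordering, not a fixed underlying thread. The OwnerThread name is hypothetical:

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // A single thread that owns all new/delete calls for the library.
    // Other threads (GCD blocks included) submit work to it instead of
    // allocating or freeing library objects themselves.
    class OwnerThread {
    public:
        OwnerThread() : worker_([this] { run(); }) {}
        ~OwnerThread() {
            submit([this] { done_ = true; });
            worker_.join();
        }
        void submit(std::function<void()> task) {
            {
                std::lock_guard<std::mutex> lock(m_);
                tasks_.push(std::move(task));
            }
            cv_.notify_one();
        }
    private:
        void run() {
            while (!done_) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return !tasks_.empty(); });
                    task = std::move(tasks_.front());
                    tasks_.pop();
                }
                task();  // new/delete of library objects happens here only
            }
        }
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> tasks_;
        bool done_ = false;   // only touched on the worker thread
        std::thread worker_;
    };

From any GCD block you would then write something like owner.submit([p] { delete p; }); so the deletion always lands on the one thread that did the allocating.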
I recently stumbled upon the question above but I am not sure if I understand what it is asking.
How would one avoid the use of scheduling policies?
I would think that there isn't any other way...
A scheduling policy has nothing to do with resource allocation! Scheduling decides when a process runs; allocation decides which resources it gets once it does.
From "Resource allocation (computer)" description on Wikipedia :-
When the user opens any program this will be counted as a process, and
therefore requires the computer to allocate certain resources for it
to be able to run. Such resources could have access to a section of
the computer's memory, data in a device interface buffer, one or more
files, or the required amount of processing power.
I don't know how you got confused between them. Every process will, at one time or another, get scheduled, unless the scheduler is an unfair one.
EDIT:
How would one avoid the use of scheduling policies?
If there is more than one user process to be executed, then a scheduling policy has to be applied so that the processes are executed in some order, and there has to be a queue to hold all the processes. See the different case of BareMetal OS below.
Then there is BareMetal OS, which is a single-address-space OS.
Multitasking on BareMetal is unusual for operating systems in this day and age. BareMetal uses an internal work queue that all CPU cores poll. A task added to the work queue will be processed by any available CPU core in the system and will execute until completion, which results in no context switch overhead.
So BareMetal OS doesn't use any scheduling policy; it is based on the cores polling the work queue.
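Here is a toy illustration of that run-to-completion polling model, with threads standing in for CPU cores; this is an analogy, not BareMetal's actual code:

    #include <cstdio>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    std::mutex m;
    std::queue<std::function<void()>> work;  // the shared work queue
    bool stop = false;                       // written under the mutex

    // Each "core" polls the queue and runs one task to completion:
    // no preemption, no context switch between tasks.
    void core_loop() {
        for (;;) {
            std::function<void()> task;
            {
                std::lock_guard<std::mutex> lock(m);
                if (work.empty()) {
                    if (stop) return;
                    continue;  // keep polling
                }
                task = std::move(work.front());
                work.pop();
            }
            task();  // runs until it finishes
        }
    }

    int main() {
        std::vector<std::thread> cores;
        for (int i = 0; i < 4; ++i) cores.emplace_back(core_loop);
        {
            std::lock_guard<std::mutex> lock(m);
            for (int i = 0; i < 8; ++i)
                work.push([i] { std::printf("task %d done\n", i); });
            stop = true;  // cores drain the queue, then exit
        }
        for (auto &t : cores) t.join();
    }

Each task runs to completion, so there is no context-switch overhead between tasks; the trade-off is that idle cores spin on the queue instead of sleeping.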
I am working on hardening a sandbox for student code execution. I think I'm satisfied that students can't share data on the file system or via signals, because I've found express rules governing those and the programs execute as different unprivileged users. However, I am having a really hard time determining from the documentation who can see shared memory (or IPC more generally: queues or semaphores) once it is created. If you create shared memory, can anyone on the same machine open it, or is there a way to control that? Does the control lie in the program that creates the memory, or can the sysadmin limit it?
Any process in the same ipc namespace can see and (potentially) access ipc objects created by other processes in that namespace. Each ipc object has the same user/group/other-rwx permissions as file system objects -- see the svipc(7) manual page.
You can create a new ipc namespace by using the clone(2) system call with the CLONE_NEWIPC flag. You can use the unshare(1) program to do a clone+exec of another program with this or certain other CLONE flags.
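So there are two levers: the creating program chooses the object's permission bits, and the sysadmin (or sandbox launcher) can isolate whole process trees in separate ipc namespaces. A sketch of the first lever, using the SysV shared memory API with most error handling elided:

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <stdio.h>

    int main() {
        // Mode 0600: only processes running under the creating uid
        // (or root) may attach. A student running as a different
        // unprivileged user gets EACCES from shmat().
        int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        if (id < 0) { perror("shmget"); return 1; }
        void *addr = shmat(id, nullptr, 0);
        /* ... use the segment ... */
        shmdt(addr);
        shmctl(id, IPC_RMID, nullptr);  // remove the segment when done
        return 0;
    }

For the second lever, something like unshare -i ./student_program (which typically needs root or a user namespace) starts the program in a fresh ipc namespace, so it cannot even name objects created outside it.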
When a program with some threads, mutexes, shared data, and file handles crashes because of too much memory allocation, which resources are freed? How do you recover?
If you mean, how do you go back and free up the resources that were allocated by the now-crashed process, well, you don't have to.
When the process exit(2)s or dies by a signal, all of the OS-allocated resources are retrieved. This is the kernel's job.
You recover by checking the results of resource acquisition functions and not allowing unchecked errors to occur in the first place.
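A minimal sketch of that discipline (hypothetical code, just illustrating the pattern of checking every acquisition at the call site):

    #include <cstdio>
    #include <cstdlib>
    #include <pthread.h>

    int main() {
        // Check every acquisition where it happens, instead of trying
        // to recover after an unchecked failure has crashed the process.
        void *buf = std::malloc(1 << 20);
        if (buf == nullptr) {
            std::fprintf(stderr, "out of memory\n");
            return 1;
        }
        pthread_mutex_t mtx;
        if (pthread_mutex_init(&mtx, nullptr) != 0) {
            std::free(buf);
            return 1;
        }
        /* ... work ... */
        pthread_mutex_destroy(&mtx);
        std::free(buf);
        return 0;
    }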
All resources that belong to the process are cleaned up.
The only exceptions are the SysV shared memory segments, message queues, and semaphores, which, although they may have been created by the process, are not owned by it.
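Because of that exception, a common defensive pattern for SysV shared memory on Linux (sketched below; the trick does not carry over to message queues or semaphores, whose IPC_RMID takes effect immediately) is to mark the segment for removal as soon as it is attached. The kernel then destroys it when the last attachment goes away, even if the process crashes:

    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main() {
        int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        void *addr = shmat(id, nullptr, 0);
        // Mark the segment for destruction now. It stays usable while
        // attached; the kernel removes it when the last attachment goes
        // away, including the implicit detach when this process exits
        // or crashes. Without this call the segment would outlive us.
        shmctl(id, IPC_RMID, nullptr);
        /* ... use addr normally ... */
        return 0;  // segment destroyed here, crash or no crash
    }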
Taking CPU affinity into account, will such an environment be useful with threading? Or will there be a performance degradation in such a system if multiple users log in and spawn multiple kernel and user threads?
When you say "taking CPU affinity into account" - are you saying that all processes have CPU affinity in this hypothetical system? Or is that just as one extra possible bit of information?
Using multiple threads will slow things down a bit if the system is already loaded (so there are more runnable threads than cores), but if there are often times when there are only (say) 2 users and 4 cores available, threading may help.
Another typical use for threads is to do something "in the background", whether that's explicitly using threads or using async calls. At that point multi-threading can definitely give a benefit (e.g. a non-hanging UI) without actually using more than one core simultaneously for much of the time.
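For the affinity part of the question: on Linux a thread can be pinned to a core before it starts with pthread_attr_setaffinity_np, a non-portable glibc extension. A small sketch:

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    void *worker(void *arg) {
        (void)arg;
        printf("running on CPU %d\n", sched_getcpu());
        return nullptr;
    }

    int main() {
        // Restrict the new thread to core 0 before it starts; other
        // threads stay free to migrate, so the scheduler can still
        // balance the rest of the load across cores.
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

        pthread_t t;
        pthread_create(&t, &attr, worker, nullptr);
        pthread_join(t, nullptr);
        pthread_attr_destroy(&attr);
        return 0;
    }

Pinning can help cache locality, but on a loaded multi-user box it can also backfire, since a pinned thread cannot migrate to an idle core.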