Does a script use JVM heap memory or system memory? - garbage-collection

For some background: as far as I know, when we start a Java application, the JVM allocates heap space and a stack for the application. The heap is used to store all the objects created by the application.
My question is: if I call a shell script from my Java code, will the memory used by the script be allocated from the JVM heap space, or will system memory be used?

The system memory will be used.
Java will invoke a fork() system call, which duplicates the parent's memory (the JVM memory currently in use) so that it can run the child (the command you are trying to run).
In general, when you execute a process, you must first fork() and then exec(). Forking creates a child process by duplicating the current process. You then call exec() to replace the child's process image with a new one, essentially executing a different program within the child process. This is how new processes are created to run other programs and scripts.
See:
Forking the JVM
Shell processes from Java and the infamous OutOfMemory
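
To make the fork()/exec() sequence concrete, here is a minimal C sketch of the pattern described above; the script path /tmp/myscript.sh is a placeholder, not anything from the question:

/* Minimal sketch of the fork()/exec() pattern. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* duplicate the current process */
    if (pid == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: replace the duplicated process image with the shell
         * running the script; its memory now belongs to the new program,
         * not to the parent's heap. The script path is illustrative. */
        execl("/bin/sh", "sh", "/tmp/myscript.sh", (char *)NULL);
        perror("execl");              /* only reached if exec fails */
        _exit(127);
    }
    /* Parent: wait for the child to finish. */
    int status;
    waitpid(pid, &status, 0);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}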

Related

Prevent a child process from writing to shared memory?

My process creates a shared memory section and uses it as a runtime stack (coroutine stacks run on the shared memory). This lets my process restore its runtime state in case it core dumps.
But I've hit a problem. After a fork, I don't want the child process to write anything to the stack memory, which is really the parent process's stack. I know I just need to make the memory copy-on-write for the child process, so I'd have to declare this shared memory private, but then changes made by the parent are lost after a core-dump restart.
Is there any mechanism in Linux that makes a shared memory region not inherited by the child process, but available only to processes that explicitly attach to it? Maybe an mmap flag, or a system call that overrides the copy-on-write behavior for a specific memory section?
Thanks.

linux: munmap shared memory in one single call

If a process calls mmap(..., MAP_ANONYMOUS | MAP_SHARED, ...) and forks N children, is it possible for any one of these processes (parent or descendants) to munmap() the memory for all processes in one go, thus releasing the physical memory, or does each of these processes have to munmap() individually?
(I know the memory will be unmapped on process exit, but the children won't exit yet.)
Alternatively, is there a way to munmap memory from another process? I'm thinking of a call like munmap(pid, ...).
Or is there a way to achieve what I am looking for using non-anonymous mappings and performing an operation on the related file descriptor (e.g. closing the file)?
My processes are performance-sensitive, and I would like to avoid lots of IPC when it becomes known that the shared memory will no longer be used by anyone.
No, there is no way to unmap memory in one go.
If you don't need the mapped memory in child processes at all, you can mark the mappings with madvise(MADV_DONTFORK) before forking.
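A minimal sketch of the madvise(MADV_DONTFORK) approach, assuming an anonymous shared mapping; the one-page region size and the usage are illustrative:

/* Sketch: exclude a mapping from fork() with madvise(MADV_DONTFORK). */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    size_t len = 4096;
    char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_ANONYMOUS | MAP_SHARED, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    /* Children created by fork() after this call do not inherit the
     * mapping at all; dereferencing `mem` in the child would fault. */
    if (madvise(mem, len, MADV_DONTFORK) == -1) {
        perror("madvise");
        return EXIT_FAILURE;
    }

    strcpy(mem, "parent only");
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: the region at `mem` is simply not mapped here. */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent still sees: %s\n", mem);
    munmap(mem, len);
    return 0;
}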
In emergency situations, you can invoke syscalls inside an external process by using gdb:
Figure out the PID of the target process
List mapped memory with cat /proc/<PID>/maps
Attach to the process using gdb: gdb -p <PID> (this suspends execution of the target process)
From gdb, run: call munmap(0x<address>, 0x<size>) for each region you need to unmap
Exit gdb (execution of the process resumes)
Obviously, if your process then tries to access the unmapped memory, it will receive SIGSEGV, so you must be 100% sure of what you are doing.

What happens to allocated memory of other threads when forking

I have a huge application that needs to fork itself at some point. The application is multithreaded and has about 200 MB of allocated memory. To ensure that the data allocated by the process won't get duplicated, my plan is to start a new thread and fork inside that thread. From what I have read, only the thread that calls fork will be duplicated, but what will happen to the allocated memory? Will it still be there? The purpose of this is to restart the application with other startup parameters: when it's forked, it will call main with my new parameters, hopefully giving a new process of the same program. Before you ask: I cannot guarantee that the binary of the process will still be in the same place as when I started it, otherwise I could just fork and exec what's in /proc/self/exe.
Threads are execution units inside the big bag of resources that a process is. A process is the whole thing you can access from any thread in the process: all the threads, all the file descriptors, all the other resources. So memory is absolutely not tied to a thread, and forking from a thread has no useful effect here: everything still needs to be copied over, since the point of forking is creating a new process.
That said, Linux has some tricks to make it faster. Copying 2 gigabytes worth of RAM is neither fast nor efficient. So when you fork, Linux actually gives the new process the same memory (at first), but uses the virtual memory system to mark it as copy-on-write: as soon as either process needs to write to that memory, the kernel intercepts the write and allocates distinct memory so that the other process isn't affected.
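A small C sketch illustrating this copy-on-write behavior: no physical copy happens at fork time, and the child's later write does not disturb the parent's copy. The 100 MB buffer size is illustrative:

/* Sketch: copy-on-write after fork(). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *data = malloc(100 * 1024 * 1024);   /* ~100 MB, shared COW after fork */
    if (!data) return EXIT_FAILURE;
    strcpy(data, "original");

    pid_t pid = fork();                        /* no physical copy yet */
    if (pid == 0) {
        strcpy(data, "modified by child");     /* kernel copies only touched pages */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent still sees: %s\n", data);   /* prints "original" */
    free(data);
    return 0;
}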

GDB on multiple instances

I have multiple instances of a particular process running on my system. At some point during execution, some of the internal data structures get overwritten with invalid data. This happens in random instances at random intervals. Is there a way to debug this other than by setting memory access breakpoints? Also, is it possible to set a memory access breakpoint on all these processes simultaneously without starting a separate instance of gdb for each process? The processes run on an x86_64 Linux system with a 2.6 kernel.
If you haven't already done so, I would recommend using valgrind (http://valgrind.org). It can detect many types of memory bugs, including memory overruns/underruns, memory leaks, double frees, etc.
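As an illustration, here is a deliberately buggy program of the kind valgrind catches; the file name buggy.c is just for the example:

/* buggy.c: heap overrun plus a leak, for valgrind to find. */
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(8);
    strcpy(buf, "overflow!");   /* 10 bytes into an 8-byte block: overrun */
    return 0;                   /* buf never freed: leak */
}

Compiled with gcc -g buggy.c -o buggy and run as valgrind ./buggy, memcheck typically reports an invalid write past the end of the block and a definitely-lost block, each with a stack trace pointing at the offending source line.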
Also, is it possible to set a memory access breakpoint on all these processes simultaneously without starting a separate instance of gdb for each process?
I don't think you can set breakpoints for all the processes in one go using gdb. As far as I know, you have to attach to each process separately and set the breakpoints.
For memory errors, valgrind is much more useful than GDB.
Assuming the instances you are talking about are forked or spawned from a single parent, you don't need separate instances of valgrind.
Just use valgrind --trace-children=yes
See http://man7.org/linux/man-pages/man1/valgrind.1.html
As to your question on GDB: an instance can debug only one process at a time.
You can only debug one process per gdb session. If your program forks, gdb follows the parent process unless told otherwise via set follow-fork-mode.
see: http://www.delorie.com/gnu/docs/gdb/gdb_26.html
If you have memory problems, it is even possible to run valgrind in combination with gdb, or to use some other memory-debugging library like efence. Efence replaces some library calls, e.g. malloc/free, with its own functions. Both efence and valgrind then use the MMU to catch invalid memory accesses. This is typically done by adding some spare space before and after each allocated memory block; if this spare memory is accessed by your application, the library (efence) or valgrind stops execution. In combination with gdb, you are pointed to the source line that accessed the forbidden memory area.
Having multiple processes requires multiple instances of gdb, which in practice is no real problem.

How to avoid shared memory leaks

I'm using shared memory between two processes on SUSE Linux, and I'm wondering how I can avoid shared memory leaks if one or both processes crash. Does a leak occur in this case? If so, how can I avoid it?
You could allocate space for two counters in the shared memory region: one for each process. Every few seconds, each process increments its counter, and checks that the other counter has been incremented as well. That makes it easy for these two processes, or an external watchdog, to tear down the shared memory if somebody crashes or exits.
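A rough sketch of this heartbeat idea, assuming a fork()-related pair sharing an anonymous mapping; the layout, loop count, and one-second interval are illustrative choices, not a fixed protocol (real code would also use proper atomics rather than volatile):

/* Sketch: two heartbeat counters in a shared mapping. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct heartbeat { volatile long counter[2]; };

int main(void) {
    struct heartbeat *hb = mmap(NULL, sizeof *hb, PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (hb == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    pid_t pid = fork();
    int me = (pid == 0) ? 1 : 0;    /* child uses slot 1, parent slot 0 */
    int peer = 1 - me;

    long last_seen = hb->counter[peer];
    for (int i = 0; i < 5; i++) {
        hb->counter[me]++;          /* my heartbeat */
        sleep(1);
        if (hb->counter[peer] == last_seen)
            fprintf(stderr, "process %d: peer looks dead, tear down shm\n", me);
        last_seen = hb->counter[peer];
    }
    if (pid == 0) _exit(0);
    waitpid(pid, NULL, 0);
    munmap(hb, sizeof *hb);
    return 0;
}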
If the subprocess is a simple fork() from the parent process, then mmap() with MAP_SHARED should work.
If the subprocess does an exec() to start a different executable, you can often pass file descriptors from shm_open() or a similar non-portable system call (see: Is there anything like shm_open() without a filename?). On many operating systems, including Linux, you can shm_unlink() the name passed to shm_open() so the object doesn't leak when your processes die, and use fcntl() to clear the close-on-exec flag on the shm file descriptor so that your child process can inherit it across exec. This is not well defined by the POSIX standard, but it appears to be very portable in practice.
If you need to use a filename instead of just a file descriptor number to pass the shared memory object to an unrelated process, then you have to figure out some way to shm_unlink() the file yourself when it's no longer needed; see John Zwinck's answer for one method.
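
A sketch of that pattern: create the object with shm_open(), shm_unlink() the name immediately so nothing is left behind on a crash, and clear FD_CLOEXEC so the descriptor survives exec(). The name /demo_shm and the 4 KB size are placeholders; on older glibc you may need to link with -lrt:

/* Sketch: leak-proof POSIX shared memory handed to an exec'd child. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = shm_open("/demo_shm", O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return EXIT_FAILURE; }

    /* Name gone: the object lives only as long as an fd or mapping holds it,
     * so a crash in either process cannot leak it. */
    shm_unlink("/demo_shm");

    if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return EXIT_FAILURE; }

    /* shm_open() sets FD_CLOEXEC; clear it so the fd survives exec(). */
    int flags = fcntl(fd, F_GETFD);
    fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);

    /* The fd number could now be handed to the exec'd child, e.g. via argv
     * or an environment variable (illustrative, not a fixed protocol). */
    printf("pass fd %d to the child\n", fd);

    close(fd);
    return 0;
}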
