I notice that the multiplicity of Process.executable is 0 to 1, which means it is possible to define a Process without an Executable. What is the use case for this?
I want to count how many times a system call was called.
I made a wrapping function in C that hooks the "open" system call.
The first thing I did was override open.c with openHooking.c.
In that code, I added code that changes a shell variable, on top of the original open.c content.
I thought I could do this by declaring an environment variable in the shell and changing it from the C code.
But I realized that is impossible, because a child process can't change its parent's environment.
I want to know how many times the system call was called.
How can I do it?
strace can do this for you, either with --summary or --summary-only, or by doing the accounting yourself from its output!
https://man7.org/linux/man-pages/man1/strace.1.html
There are many flags, but perhaps simply:
strace --follow-forks --summary ./a.out
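If you do want to count the calls from inside the process with your own wrapper, as you originally tried, the usual Linux approach is an LD_PRELOAD interposer instead of editing libc sources. Here is a minimal sketch of that idea (my own illustration, not something from your setup); note that it only sees calls that go through the libc open() wrapper, so it will miss openat() and statically linked binaries:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>

static long open_count = 0;

/* Runs when the traced process exits and prints the total. */
__attribute__((destructor))
static void report(void)
{
    fprintf(stderr, "open() was called %ld times\n", open_count);
}

int open(const char *path, int flags, ...)
{
    /* Look up the real open() the first time we are called. */
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    open_count++;

    /* open() takes a mode argument only when O_CREAT is set. */
    mode_t mode = 0;
    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    return real_open(path, flags, mode);
}

Compile it as a shared object and preload it:

gcc -shared -fPIC count_open.c -o count_open.so -ldl
LD_PRELOAD=./count_open.so ./a.out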
We have a startup script for an application (owned and developed by a different team, but deployments are managed by us) which prompts Y/N to confirm starting after deployment. The number of times it prompts varies depending on the changes in the release.
So the number of prompts can vary from 1 to N (possibly 100 or more).
We have automated the deployment and startup using Jenkins shell-script jobs, but the number of prompts is hardcoded to 20, which may sometimes be too few.
Could anyone please advise how the number of prompts can be handled dynamically? We need to answer Y whenever the output contains the pattern "Do you really want to start".
I have checked a few options, such as expect and read, but was not able to come up with a solution.
Thanks in advance!
In general, the best way to handle this is by (a) using a standard process management system, such as your distro's preferred init system; or, if that's not possible, (b) to adjust the script to run noninteractively (e.g., with a --yes or --noninteractive option).
Barring that, assuming your script reads from standard input and not the TTY, you can use the standard program yes and pipe it into the command you're running, like so:
$ yes | ./deploy
yes prints y (or its argument) over and over until it's killed, usually by SIGPIPE.
If your process is reading from /dev/tty instead of standard input, and you really can't convince the other team to come to their senses and add an appropriate option, you'll need to use expect for this.
I'm implementing a simple job scheduler, which spawns a new process for every job to run. When a job exits, I'd like it to report the number of actions executed back to the scheduler.
The simplest way I could find is to exit with the number of actions as the return code. The process would, for example, exit with return code 3 for "3 actions executed".
But since the standard (AFAIK) is to use return code 0 when a process exits successfully, and any other value when there was an error, would this approach risk creating any problems?
Note: the child process is not an executable script but a fork of the parent, so it is not accessible from the outside world.
What you are looking for is inter-process communication, and there are plenty of ways to do it:
Sockets
Shared memory
Pipes
Exclusive file descriptors (to some extent; rather go for something else if you can)
...
The return-code convention is not something a regular programmer should dare to violate.
The only risk is confusing a calling script. What you describe makes sense, since what you really want is the count. As Joe said, use negative values for failures, and you should consider including a --help option that explains the return values, so you can figure out what this code is doing when you try to use it next month. (Bear in mind that Unix exit statuses are truncated to eight bits, so a negative value or a count above 255 will wrap around.)
I would use logs for this: log the number of actions executed to the scheduler. That way you can also log datetimes and other extra info.
I would not change the return convention...
If the scheduler spawns the children and you are the one writing it, you could also open a pipe per child, or use named pipes or Unix domain sockets, and use that for inter-process communication, writing the number of processed jobs there.
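To make the pipe suggestion concrete, here is a minimal sketch (the names and the action count are illustrative): the scheduler fork()s the job, the job writes its count into the pipe, and the exit code keeps its conventional meaning.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                      /* child: the job */
        close(fds[0]);                   /* not reading */
        int actions = 3;                 /* ... run the job ... */
        write(fds[1], &actions, sizeof actions);
        close(fds[1]);
        exit(EXIT_SUCCESS);              /* 0 still means success */
    }

    close(fds[1]);                       /* parent: the scheduler */
    int actions = 0;
    read(fds[0], &actions, sizeof actions);
    close(fds[0]);

    int status;
    waitpid(pid, &status, 0);
    printf("child did %d actions, exit status %d\n",
           actions, WEXITSTATUS(status));
    return 0;
}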
I would stick with the conventions, namely returning 0 for success, especially if your program is visible to or usable by other people; in any case, document such decisions well.
Anyway, apart from conventions there are also standards.
I'm a new Linux user and am confused about the task priorities of applications that are run dynamically at run time.
Here's the scenario:
1. I create an application called myApplication and install it in one of the bin folders (/usr/sbin)
2. This task is not run until
a. I start it explicitly from a shell, or
b. I call it from a script based on some event.
The application executes and terminates.
Q1. How do I know its priority?
Q2. Will it take the default priority and nice value? I see that my application's nice value is 0, which I assume is the default.
Q3. How can I find the priority of all such applications that are installed in one of the bin folders but are invoked at run time and terminate after their work is done?
I thoroughly searched the forum before posting this query and I apologize if it has already been answered.
Many many thanks in advance.
Keshav
Programs don't generally have an associated nice value. The default is for spawned processes to inherit the parent's niceness, regardless of which program is being started. To change this you can use the nice or renice command-line utilities, or the setpriority() system call.
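To illustrate (a minimal sketch; the value 10 is arbitrary): a freshly started process can inspect the niceness it inherited and lower its own priority with setpriority().

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* who == 0 means the calling process itself. */
    printf("inherited nice value: %d\n", getpriority(PRIO_PROCESS, 0));

    /* Lower our priority; only root may raise it (go more negative). */
    if (setpriority(PRIO_PROCESS, 0, 10) == -1)
        perror("setpriority");

    printf("new nice value: %d\n", getpriority(PRIO_PROCESS, 0));
    return 0;
}

From the shell, the equivalent is nice -n 10 myApplication, and you can watch the nice values of running processes with ps -o pid,ni,comm.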
We have a stub that we launch from inittab, which execv's our process (ARM Linux, kernel 2.6.25).
When testing the process, it fails only if launched from inittab and execv'd. If launched from the command line it works perfectly, every time.
The process makes heavy use of SysV IPC.
Are there any differences between the two launch methods that I should be aware of?
As Matthew mentioned, it is probably an environment variable issue. Try dumping your env list before calling your program in both cases, through the stub and 'by hand'.
BTW, it would help a lot if you could provide more information on why your program crashed. Log file? Core dump/gdb? Return value from execve?
Edit:
Other checks: are you sure you pass exactly the same parameter list (if there are parameters)?
To answer your question: there is no difference between the two methods. Your shell fork()s and finally calls execve() to launch your process, feeding it the parameters you provided by hand and the environment variables you've set in your shell. BTW, when launching your program through init, it may be started during an early stage of your machine's startup. Are you sure everything your application needs is ready at that point?
Could it be an issue of environment variables? If so, consider using execve or execle with an appropriate envp argument.
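For example (a rough sketch; the paths and variables are assumptions, not known from the question), the stub could exec the process with an explicit, known-good environment instead of whatever init provides:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char *argv[] = { "/usr/local/bin/myprocess", NULL };

    /* Spell out the environment instead of inheriting init's. */
    char *envp[] = {
        "PATH=/usr/local/bin:/usr/bin:/bin",
        "LD_LIBRARY_PATH=/usr/local/lib",
        NULL
    };

    execve(argv[0], argv, envp);
    perror("execve");   /* reached only if the exec failed */
    return 1;
}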
The environment variable suggestion is a good one; specifically, I'd check $PATH to make sure your dependent libraries are being found (if you have any). Another thing you could check: are you running under the same uid/gid when launched from inittab?
And what if you replace your stub with a shell script?
If it works from the command line, it should work from a shell script, and then you'll know whether the problem is your stub or the fact that it runs from inittab.
Could it be a controlling-tty issue?
Another advantage of the shell script is that you can edit it and strace your program to see where it fails.
It turned out to be a mismatched kernel/library issue. Everything cleaned up after a full recompile.