One file input to two programs in a script - Linux

Hi, I have a script that runs two programs:
#Script file
./prog1
./prog2
prog1 is a C program:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    printf("prog1 running\n");
    int tmp;
    scanf("%d", &tmp);
    printf("%d\n", tmp + 10);
    printf("prog1 ended\n");
    return 0;
}
prog2 is a C program as well:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    printf("prog2 running\n");
    int tmp;
    scanf("%d\n", &tmp);
    printf("%d\n", tmp + 10);
    printf("prog2 ended\n");
    return 0;
}
I run the command
./script < file
where file is
123
456
The output is
prog1 running
133
prog1 ended
prog2 running
10
prog2 ended
It seems like prog2 did not get the input from the file. What is happening under the hood?
Could it be that prog2 read the "\n" instead of a number?

Your script should be this:
#!/bin/bash
exec 3<&1
tee >(./prog2 >&3) | ./prog1
This uses the tee command to duplicate stdin and bash's >( ) process substitution feature to open a temporary file descriptor. (File descriptor 3 is a copy of the original stdout, so prog2's output still reaches it even though prog2 runs inside the process substitution.)
See this answer to read the whole story.

scanf reads buffered input. So when your first program reads from stdin, it speculatively reads ahead all of the available input to make future reads from stdin faster (avoiding a system call for every byte). When the second program runs, there is no input left, and (since you failed to check the result of scanf()) you end up with 0 in tmp.
You should be able to modify the buffering strategy in your application (at the expense of speed) using the setvbuf() standard function.
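For example, here is a minimal sketch of prog1 with stdin switched to unbuffered mode before the first read (this assumes you can recompile the programs). Each scanf() then consumes only the bytes it needs, at the cost of one read() system call per byte, and the second line of the file is still there for prog2:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    /* Unbuffered stdin: stdio no longer reads ahead of what is
       actually consumed, so unread input stays in the file. */
    setvbuf(stdin, NULL, _IONBF, 0);
    printf("prog1 running\n");
    int tmp;
    if (scanf("%d", &tmp) != 1) {   /* check the result this time */
        fprintf(stderr, "prog1: no number on stdin\n");
        return 1;
    }
    printf("%d\n", tmp + 10);
    printf("prog1 ended\n");
    return 0;
}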

Related

Real parallelism in Linux shell

I am trying to get real parallelism in the Linux shell, but I can't achieve it.
I have two programs: allones, which only prints '1' characters, and allzeros, which only prints '0' characters.
When I execute "./allones & ./allzeros &", I get long runs of '0's and long runs of '1's that mix in big chunks (e.g. "1111....111000...0000111...111000...000"). My processor has 8 cores.
However, when I executed my own program on a multi-core FPGA (with no OS), distributing the programs across different cores, I got something like "011000101000011010...".
How can I run it on Linux to get a result similar to what I get on the multi-core FPGA?
Sounds like you're experiencing libc's default stdout buffering:
Here's a test program spam.c:
#include <stdio.h>

int main(int argc, char** argv) {
    while (1) {
        printf("%s", argv[1]);
    }
}
We can run it with:
$ ./spam 0 & ./spam 1 & sleep 1; killall spam
11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111(...)000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000(...)
On my systems, each block is exactly 1024 bytes long, strongly hinting at a buffering issue.
Here's the same code with an fflush() after each write to defeat the buffering:
#include <stdio.h>

int main(int argc, char** argv) {
    while (1) {
        printf("%s", argv[1]);
        fflush(stdout);
    }
}
This is the new output:
100111001100110011001100110011001100110011100111001110011011001100110011001100110011001100110011001100110011001100110011001100011000110001100110001100100110011001100111001101100110011001100110011001100110000000000110010011000110011
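A setvbuf() call at the top of main() gives the same interleaving without the cost of an explicit flush on every iteration; a minimal sketch:
#include <stdio.h>

int main(int argc, char** argv) {
    /* Disable stdout buffering entirely: each printf() is written
       out immediately, so the two processes' output can interleave. */
    setvbuf(stdout, NULL, _IONBF, 0);
    while (1) {
        printf("%s", argv[1]);
    }
}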

Difficulty in using execve

I am trying to execute the "word count" command on a file given by an absolute path - "/home/aaa/xxzz.txt". I have closed stdin so as to take input from the file, but the program doesn't give any output.
Also, if I add some statement after the "execve" call, it gets executed too. Shouldn't the program exit after execve?
#include <fcntl.h>
#include <unistd.h>

int main()
{
    char *envp[] = { NULL };
    int fd = open("/home/aaa/xxzz.txt", O_RDONLY);
    close(0);
    dup(fd);
    char *param[] = { "/bin/wc", NULL };
    execve("/bin/wc", param, envp);
}
Probably wc does not live in /bin on your system. It normally lives in /usr/bin; /bin/wc only works on systems where /bin is a symlink to /usr/bin. If I change the path in your example to /usr/bin/wc, it works for me:
#include <unistd.h>
#include <fcntl.h>

int
main()
{
    char *envp[] = { NULL };
    int fd = open("/home/aaa/xxzz.txt", O_RDONLY);
    close(0);
    dup(fd);
    char *program = "/usr/bin/wc";
    char *param[] = { program, NULL };
    execve(program, param, envp);
}
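As for your second question: execve() replaces the process image only on success, so it never returns to the caller unless it failed. The statements you added after the call ran precisely because the exec of /bin/wc failed and control fell through. The usual idiom is to treat everything after the call as the error path; a sketch:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char *envp[] = { NULL };
    char *param[] = { "/usr/bin/wc", NULL };
    execve("/usr/bin/wc", param, envp);
    /* Reached only if execve() failed; on success the process
       image has been replaced and nothing below ever runs. */
    perror("execve");
    return 1;
}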

Is there a way to view the real process cmdline on Linux?

Here is a simple program that fakes the process name and cmdline on Linux:
#include <string.h>
#include <sys/prctl.h>
#include <stdio.h>
#include <unistd.h>

#define NewName "bash"
#define ProcNameMaxLen 16

int main(int argc, char **argv) {
    int oldlen = strlen(*argv);
    char procname[ProcNameMaxLen];
    memset(*argv, 0, oldlen);
    memccpy(*argv, NewName, 0, oldlen);            /* modify cmdline */
    memccpy(procname, NewName, 0, ProcNameMaxLen);
    prctl(PR_SET_NAME, procname);                  /* modify procname */
    sleep(60);
    return 0;
}
After running this code I can't see the real name with ps.
The real information can still be found in /proc/xxx/exe and /proc/xxx/environ, but that is cumbersome.
Is there a good way to view the real information for all processes?
I think this is a big security problem, because I usually check processes with ps on my server.
Way 1: lsof -d txt
Waiting for more answers...
lsof will tell you the original executable name as it is one of the open files of the malicious process. You can inspect a number of processes using the -p option, or query a single user with the -u option.
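The /proc/xxx/exe link mentioned in the question can also be resolved programmatically; here is a minimal sketch (pass a PID as the first argument, or it inspects itself):
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    /* /proc/<pid>/exe still points at the real binary, no matter
       what the process wrote over its own argv[0]. */
    char link[64], target[PATH_MAX];
    snprintf(link, sizeof link, "/proc/%s/exe", argc > 1 ? argv[1] : "self");
    ssize_t n = readlink(link, target, sizeof target - 1);
    if (n < 0) {
        perror("readlink");
        return 1;
    }
    target[n] = '\0';
    printf("%s -> %s\n", link, target);
    return 0;
}
(Note that reading this link for another user's process requires appropriate privileges.)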

How to tell if a downstream process in a Unix pipe has crashed

I have a Linux process (let's call it the main process) whose standard output is piped to another process (called the downstream process) by means of the shell's pipe operator (|). The main process is set up to receive SIGPIPE signals if the downstream process crashes. Unfortunately, SIGPIPE is not raised until the main process writes to stdout. Is there a way to tell sooner that the downstream process has terminated?
One approach is to write continuously to the downstream process, but that seems wasteful. Another approach is to have a separate watchdog process that monitors all relevant processes, but that is complex. Or perhaps there is some way to use select() to trigger the signal. I am hoping that the main process can do all this itself.
It appears the stdout file descriptor becomes "ready for reading" when the receiver crashes:
$ gcc -Wall select-downstream-crash.c -o select-downstream-crash
$ gcc -Wall crash-in-five-seconds.c -o crash-in-five-seconds
$ ./select-downstream-crash | ./crash-in-five-seconds
... five seconds pass ...
stdout is ready for reading
Segmentation fault
select-downstream-crash.c
#include <err.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    fd_set readfds;
    int rc;

    FD_ZERO(&readfds);
    FD_SET(STDOUT_FILENO, &readfds);
    rc = select(STDOUT_FILENO + 1, &readfds, NULL, NULL, NULL);
    if (rc < 0)
        err(1, "select");
    if (FD_ISSET(STDOUT_FILENO, &readfds))
        fprintf(stderr, "stdout is ready for reading\n");
    return 0;
}
crash-in-five-seconds.c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    sleep(5);
    putchar(*(char *)NULL);  /* deliberate crash: NULL dereference */
    return 0;
}
I tried this on Linux, but don't know if it'll work elsewhere. It would be nice to find some documentation explaining this observation.
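One piece of documentation that may explain it: the Linux poll(2) man page says POLLERR is set for a file descriptor referring to the write end of a pipe when the read end has been closed, which would be consistent with select() reporting the descriptor as readable. A poll()-based version of the same probe, as a sketch:
#include <err.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* POLLERR is always reported, even with no events requested;
       for the write end of a pipe it fires once the read end
       (the downstream process) has gone away. */
    struct pollfd pfd = { .fd = STDOUT_FILENO, .events = 0, .revents = 0 };
    if (poll(&pfd, 1, -1) < 0)
        err(1, "poll");
    if (pfd.revents & POLLERR)
        fprintf(stderr, "downstream end of the pipe is gone\n");
    return 0;
}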
If the main process forks the other processes, then it will get SIGCHLD notifications when they exit.
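A minimal sketch of that approach, assuming the main process does its own fork() rather than relying on the shell's pipe:
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void on_chld(int sig)
{
    (void)sig;
    /* Only async-signal-safe calls belong in a handler. */
    write(STDERR_FILENO, "downstream exited\n", 18);
}

int main(void)
{
    signal(SIGCHLD, on_chld);
    pid_t pid = fork();
    if (pid == 0) {            /* stand-in for the downstream process */
        sleep(2);
        _exit(1);
    }
    pause();                   /* returns when SIGCHLD arrives */
    waitpid(pid, NULL, 0);     /* reap the child */
    return 0;
}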

Externally disabling signals for a Linux program

On Linux, is it possible to somehow disable signaling for programs externally... that is, without modifying their source code?
Context:
I'm calling a C (and also a Java) program from within a bash script on Linux. I don't want any interruptions for my bash script or for the programs that the script launches (as foreground processes).
While I can use a...
trap '' INT
... in my bash script to disable the Ctrl-C signal, this works only while control happens to be in the bash code itself. That is, if I press Ctrl-C while the C program is running, the C program gets interrupted and exits! This C program is doing a critical operation, which is why I don't want it to be interrupted. I don't have access to the source code of this C program, so signal handling inside the C program is out of the question.
#!/bin/bash
trap 'echo You pressed Ctrl C' INT
# A C program to emulate a real-world, long-running program,
# which I don't want to be interrupted, and for which I
# don't have the source code!
#
# File: y.c
# To build: gcc -o y y.c
#
# #include <stdio.h>
# int main(int argc, char *argv[]) {
# printf("Performing a critical operation...\n");
# for(;;); // Do nothing forever.
# printf("Performing a critical operation... done.\n");
# }
./y
Regards,
/HS
The process signal mask is inherited across exec, so you can simply write a small wrapper program that blocks SIGINT and executes the target:
#include <signal.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    sigset_t sigs;

    sigemptyset(&sigs);
    sigaddset(&sigs, SIGINT);
    sigprocmask(SIG_BLOCK, &sigs, 0);
    if (argc > 1) {
        execvp(argv[1], argv + 1);
        perror("execvp");
    } else {
        fprintf(stderr, "Usage: %s <command> [args...]\n", argv[0]);
    }
    return 1;
}
If you compile this program as noint, you would just execute ./noint ./y.
As ephemient notes in comments, the signal disposition is also inherited, so you can have the wrapper ignore the signal instead of blocking it:
#include <signal.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    struct sigaction sa = { 0 };

    sa.sa_handler = SIG_IGN;
    sigaction(SIGINT, &sa, 0);
    if (argc > 1) {
        execvp(argv[1], argv + 1);
        perror("execvp");
    } else {
        fprintf(stderr, "Usage: %s <command> [args...]\n", argv[0]);
    }
    return 1;
}
(and of course for a belt-and-braces approach, you could do both).
The "trap" command is local to this process, never applies to children.
To really trap the signal, you have to hack it using a LD_PRELOAD hook. This is non-trival task (you have to compile a loadable with _init(), sigaction() inside), so I won't include the full code here. You can find an example for SIGSEGV on Phack Volume 0x0b, Issue 0x3a, Phile #0x03.
Alternativlly, try the nohup and tail trick.
nohup your_command &
tail -F nohup.out
I would suggest that your C (and Java) application needs rewriting so that it can handle an exception: what happens if it really does need to be interrupted, the power fails, etc.?
If that fails, J-16 is right on the money. Does the user need to interact with the process, or just see the output (do they even need to see the output)?
The solutions explained above did not work for me, even when chaining both commands proposed by Caf.
However, I finally succeeded in getting the expected behavior this way :
#!/bin/zsh
setopt MONITOR
TRAPINT() { print AAA }
print 1
( ./child & ; wait)
print 2
If I press Ctrl-C while child is running, the script waits for it to exit, then prints AAA and 2. child does not receive any signal.
The subshell is used to prevent the PID from being shown.
And sorry... this is for zsh, though the question is about bash; I do not know bash well enough to provide an equivalent script.
Here is example code that re-enables signals like Ctrl+C for programs which block them. Preloading it overrides libc's sigaddset() with a stub that does nothing, so the target program's attempt to add SIGINT to a blocked set silently has no effect:
fixControlC.c
#include <stdio.h>
#include <signal.h>

int sigaddset(sigset_t *set, int signo) {
    printf("int sigaddset(sigset_t *set=%p, int signo=%d)\n", set, signo);
    return 0;
}
Compile it:
gcc -fPIC -shared -o fixControlC.so fixControlC.c
Run it:
LD_LIBRARY_PATH=. LD_PRELOAD=fixControlC.so mysqld
