I am writing a command for a custom shell that performs an ssh login from one terminal to another. To do the ssh, I am using Linux's built-in ssh command; my code that does the login is below.
However, I am seeing that the I/O buffers are not in sync.
This is what I see on the terminal: after SSHing to the other machine, I typed the following.
PRT# ssh 192.168.10.42
PRT# Could not create directory '/root/.ssh'.
root@192.168.10.42's password:
# screen -r
-sh: cen-: not found
# hello
-sh: el: not found
#
I don't know what the reason is here. Here is the code.
int sshLogin(char *destIp)
{
    char cmd[CMD_LEN];
    char readbuff[CMD_LEN];
    pid_t pid;
    int ret = 0;
    int fd[2];
    int result;
    int status = 0;

    memset(cmd, '\0', sizeof(cmd));
    /** -tt required to force pseudo-tty allocation because we are behind the screen app **/
    sprintf(cmd, "/usr/bin/ssh -tt %s", destIp);
    /** create a pipe; this will be shared on fork() **/
    pipe(fd);
    if ((pid = fork()) == -1)
    {
        perror("fork:");
        return -1;
    }
    if (pid == 0)
    {
        /** child process of Main APP -- make this the parent process for the command **/
        if ((pid = fork()) == -1)
        {
            perror("fork:");
            return -1;
        }
        if (pid == 0)
        {
            /** basically Main APP's grandchild -- this is where we run the command **/
            ret = execlp("ssh", "ssh", "-tt", destIp, NULL);
            printf("done execlp\r\n");
        }
        else
        {
            /** child of Main APP -- keep this blocked until the grandchild is done with the job **/
            while ((read(fd[0], readbuff, sizeof(readbuff))))
            {
                printf("%s", readbuff);
            }
            waitpid(0, &status, 0);
            LOG_STRING("SSH CONN CLOSED");
            exit(0);
        }
    }
    else
    {
        /** Parent process APP MAIN **/
        /** no need to wait; let APP MAIN run **/
    }
    return 0;
}
Based on Patrick's ideas.
POST 2# - It seems to work when we close stdin in the parent process. However, everything becomes very sluggish; it feels as if my keystrokes register too slowly, and the whole system bogs down. I also have a web server on this terminal, and I see that I can no longer access the web.
So the solution is somewhere around stdin, but I am not sure where.
int sshLogin(char *destIp)
{
    char cmd[CMD_LEN];
    char readbuff[CMD_LEN];
    pid_t pid;
    int ret = 0;
    int fd[2];
    int result;
    int status = 0;

    memset(cmd, '\0', sizeof(cmd));
    /** -tt required to force pseudo-tty allocation because we are behind the screen app **/
    sprintf(cmd, "/usr/bin/ssh -tt %s", destIp);
    /** create a pipe; this will be shared on fork() **/
    pipe(fd);
    if ((pid = fork()) == -1)
    {
        perror("fork:");
        return -1;
    }
    if (pid == 0)
    {
        /** child process of Main APP -- make this the parent process for the command **/
        if ((pid = fork()) == -1)
        {
            perror("fork:");
            return -1;
        }
        if (pid == 0)
        {
            /** basically Main APP's grandchild -- this is where we run the command **/
            ret = execlp("ssh", "ssh", "-tt", destIp, NULL);
            printf("done execlp\r\n");
        }
        else
        {
            /** child of Main APP -- keep this blocked until the grandchild is done with the job **/
            while ((read(fd[0], readbuff, sizeof(readbuff))))
            {
                printf("%s", readbuff);
            }
            waitpid(0, &status, 0);
            LOG_STRING("SSH CONN CLOSED");
            exit(0);
        }
    }
    else
    {
        /** Parent process APP MAIN **/
        /** no need to wait; let APP MAIN run **/
        close(STDIN_FILENO);
    }
    return 0;
}
Basically, I have added close(STDIN_FILENO);
You have two different processes trying to read from stdin. This causes process 1 to get character 1, process 2 to get character 2, process 1 to get character 3, process 2 to get character 4, and so on, alternating back and forth.
Your two processes are:
execlp("ssh", "ssh", "-tt", destIp, NULL);
while( (read(fd[0], readbuff, sizeof(readbuff))))
Basically, you need to ditch the read(fd[0], ...).
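A minimal sketch of that fix (this assumes the intent is simply to hand the controlling terminal to ssh): fork once, exec ssh with the inherited stdin/stdout, and wait, so nothing competes with ssh for input:

/* Sketch only: ssh inherits the terminal; no parallel read from stdin. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int sshLogin(char *destIp)
{
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return -1;
    }
    if (pid == 0) {
        /* child: ssh is now the only process reading the terminal */
        execlp("ssh", "ssh", "-tt", destIp, (char *)NULL);
        perror("execlp");          /* only reached if exec fails */
        _exit(127);
    }
    /* parent: block until the ssh session ends */
    int status = 0;
    waitpid(pid, &status, 0);
    return 0;
}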
My initial thought is that perhaps it is buffering the output: stdout is buffered, so unless you print a newline, nothing is written until a certain number of characters builds up, because I/O operations are expensive. The result is that there is a delay because your program is waiting to print.
My suggestion: in your main function, before calling your sshLogin function, try disabling buffering with this line of code:
setbuf(stdout, NULL);
You can also call fflush(stdout); periodically to do the same thing, but the above method is more efficient. Try it and see if that solves your problem.
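For instance, a minimal sketch of where that call would go, assuming sshLogin() from the question is declared elsewhere:

#include <stdio.h>

int sshLogin(char *destIp);        /* the question's function, assumed declared */

int main(void)
{
    /* Turn off stdout buffering before any output happens; setvbuf with
     * _IONBF is the equivalent, more configurable spelling. */
    setbuf(stdout, NULL);
    sshLogin("192.168.10.42");     /* hypothetical call site */
    return 0;
}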
Related
I am porting a debugger, 'pi' ('process inspector'), to Linux and am
working on the code that forks and execs a child in order to inspect it.
I am following standard procedure (I believe) but the wait is hanging.
'hang' is the procedure which does the work, the 'cmd' argument being
the name of the binary (a.out) to trace:
int Hostfunc::hang(char *cmd){
    char *argv[10], *cp;
    int i;
    Localproc *p;
    struct exec exec;
    struct rlimit rlim;

    i = strlen(cmd);
    if (++i > sizeof(procbuffer)) {
        i = sizeof(procbuffer) - 1;
        procbuffer[i] = 0;
    }
    bcopy(cmd, procbuffer, i);
    argv[0] = cp = procbuffer;
    /* split the command line into whitespace-separated argv[] entries */
    for (i = 1;;) {
        while (*cp && *cp != ' ')
            cp++;
        if (!*cp) {
            argv[i] = 0;
            break;
        } else {
            *cp++ = 0;
            while (*cp == ' ')
                cp++;
            if (*cp)
                argv[i++] = cp;
        }
    }
    hangpid = fork();
    if (!hangpid) {
        int fd, nfiles = 20;
        if (getrlimit(RLIMIT_NOFILE, &rlim) == 0)
            nfiles = rlim.rlim_cur;
        for (fd = 0; fd < nfiles; ++fd)
            close(fd);
        open("/dev/null", 2);        /* becomes fd 0 (O_RDWR) */
        dup2(0, 1);
        dup2(0, 2);
        setpgid(0, 0);
        ptrace(PTRACE_TRACEME, 0, 0, 0);
        execvp(argv[0], argv);
        exit(0);
    }
    if (hangpid < 0)
        return 0;
    p = new Localproc;
    if (!p) {
        kill(hangpid, 9);
        return 0;
    }
    p->sigmsk = sigmaskinit();
    p->pid = hangpid;
    if (!procwait(p, 0)) {
        delete p;
        return 0;
    }
    if (p->state.state == UNIX_BREAKED)
        p->state.state = UNIX_HALTED;
    p->opencnt = 0;
    p->next = phead;
    phead = p;
    return hangpid;
}
I put the 'abort()' in to catch a non-zero return from ptrace,
but that is not happening. The call to 'raise' seems to be a
common practice, but a cursory look at gdb's code reveals it is
not used there; in any case it makes no difference to the outcome.
'procwait' is as follows:
int Hostfunc::procwait(Localproc *p, int flag){
    int tstat;
    int cursig;
again:
    if (p->pid != waitpid(p->pid, &tstat, (flag & WAIT_POLL) ? WNOHANG : 0))
        return 0;
    if (flag & WAIT_DISCARD)
        return 1;
    if (WIFSTOPPED(tstat)) {
        cursig = WSTOPSIG(tstat);
        if (cursig == SIGSTOP)
            p->state.state = UNIX_HALTED;
        else if (cursig == SIGTRAP)
            p->state.state = UNIX_BREAKED;
        else {
            if (p->state.state == UNIX_ACTIVE &&
                !(p->sigmsk & bit(cursig))) {
                ptrace(PTRACE_CONT, p->pid, (void *)1, cursig);
                goto again;
            }
            else {
                p->state.state = UNIX_PENDING;
                p->state.code = cursig;
            }
        }
    } else {
        p->state.state = UNIX_ERRORED;
        p->state.code = WEXITSTATUS(tstat) & 0xFFFF;
    }
    return 1;
}
The 'waitpid' in 'procwait' just hangs. If I run the program with
the above code and then run 'ps', I can see that 'pi' has forked
but hasn't yet called exec, because the command line is still
'pi' and not the name of the binary I am forking. I discovered
that if I remove the 'raise', 'pi' still hangs, but 'ps' now
shows that the forked program has the name of the binary being
examined, which suggests it has performed the exec.
So, as far as I can see, I am following documented procedures to
take control of a forked process, but it isn't working.
Noel Hunt
I have found the problem (with my own code, as Nate pointed out), but the cause was obscure until I ran 'strace pi'. It was clear from that output that there was a SIGCHLD handler, and that it was executing a wait. The parent enters wait, SIGCHLD is delivered, the handler's wait reaps the status of the child, and then the wait in the parent is restarted and hangs because there is no longer any change of state to report. The SIGCHLD handler makes sense because pi wants to be informed of state changes in the child. The first version of 'pi' I got working was a Solaris version; it uses /proc for process control, so there was no use of 'wait' to get child status, hence I didn't see this problem in the Solaris version.
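A minimal reproduction of that effect (this is not pi's code, just an illustration of the mechanism):

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void reap(int sig)
{
    (void)sig;
    wait(NULL);                 /* the handler reaps the child's status */
}

int main(void)
{
    signal(SIGCHLD, reap);
    pid_t pid = fork();
    if (pid == 0)
        _exit(0);               /* child exits immediately */
    sleep(1);                   /* give SIGCHLD time to arrive */
    if (waitpid(pid, NULL, 0) < 0)
        perror("waitpid");      /* ECHILD: the handler already reaped it */
    return 0;
}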
I want to know how rsh runs a command. I am using the netkit-rsh-0.17 package; my OS is CentOS.
In the rshd directory, rshd.c performs the work of running a command on the server.
In this file, doit() is the main function that performs all the tasks.
Questions:
1. What do pwd->pw_dir, pwd->pw_uid and pwd->pw_shell mean in this code?
2. What does pv do here?
3. Explain using the command rsh localhost ulimit -n.
doit()
static void
doit(struct sockaddr_in *fromp)
{
    char cmdbuf[ARG_MAX+1];
    const char *theshell, *shellname;
    char locuser[16], remuser[16];
    struct passwd *pwd;
    int sock = -1;
    const char *hostname;
    u_short port;
    int pv[2], pid, ifd;

    signal(SIGINT, SIG_DFL);
    signal(SIGQUIT, SIG_DFL);
    signal(SIGTERM, SIG_DFL);

    alarm(60);
    port = getint();
    alarm(0);

    if (port != 0) {
        int lport = IPPORT_RESERVED - 1;
        sock = rresvport(&lport);
        if (sock < 0) {
            syslog(LOG_ERR, "can't get stderr port: %m");
            exit(1);
        }
        if (port >= IPPORT_RESERVED) {
            syslog(LOG_ERR, "2nd port not reserved\n");
            exit(1);
        }
        fromp->sin_port = htons(port);
        if (connect(sock, (struct sockaddr *)fromp,
                    sizeof(*fromp)) < 0) {
            syslog(LOG_INFO, "connect second port: %m");
            exit(1);
        }
    }

#if 0
    /* We're running from inetd; socket is already on 0, 1, 2 */
    dup2(f, 0);
    dup2(f, 1);
    dup2(f, 2);
#endif

    getstr(remuser, sizeof(remuser), "remuser");
    getstr(locuser, sizeof(locuser), "locuser");
    getstr(cmdbuf, sizeof(cmdbuf), "command");

    if (!strcmp(locuser, "root")) paranoid = 1;

    hostname = findhostname(fromp, remuser, locuser, cmdbuf);

    setpwent();
    pwd = doauth(remuser, hostname, locuser);
    if (pwd == NULL) {
        fail("Permission denied.\n",
             remuser, hostname, locuser, cmdbuf);
    }

    if (chdir(pwd->pw_dir) < 0) {
        chdir("/");
        /*
         * error("No remote directory.\n");
         * exit(1);
         */
    }

    if (pwd->pw_uid != 0 && !access(_PATH_NOLOGIN, F_OK)) {
        error("Logins currently disabled.\n");
        exit(1);
    }

    (void) write(2, "\0", 1);
    sent_null = 1;

    if (port) {
        if (pipe(pv) < 0) {
            error("Can't make pipe.\n");
            exit(1);
        }
        pid = fork();
        if (pid == -1) {
            error("Can't fork; try again.\n");
            exit(1);
        }
        if (pid) {
            close(0);
            close(1);
            close(2);
            close(pv[1]);
            stderr_parent(sock, pv[0], pid);
            /* NOTREACHED */
        }
        setpgrp();
        close(sock);
        close(pv[0]);
        dup2(pv[1], 2);
        close(pv[1]);
    }

    theshell = pwd->pw_shell;
    if (!theshell || !*theshell) {
        /* shouldn't we deny access? */
        theshell = _PATH_BSHELL;
    }

#if BSD > 43
    if (setlogin(pwd->pw_name) < 0) {
        syslog(LOG_ERR, "setlogin() failed: %m");
    }
#endif

#ifndef USE_PAM
    /* if PAM, already done */
    if (setgid(pwd->pw_gid)) {
        syslog(LOG_ERR, "setgid: %m");
        exit(1);
    }
    if (initgroups(pwd->pw_name, pwd->pw_gid)) {
        syslog(LOG_ERR, "initgroups: %m");
        exit(1);
    }
#endif

    if (setuid(pwd->pw_uid)) {
        syslog(LOG_ERR, "setuid: %m");
        exit(1);
    }

    environ = envinit;
    strncat(homedir, pwd->pw_dir, sizeof(homedir)-6);
    homedir[sizeof(homedir)-1] = 0;

    strcat(path, _PATH_DEFPATH);

    strncat(shell, theshell, sizeof(shell)-7);
    shell[sizeof(shell)-1] = 0;

    strncat(username, pwd->pw_name, sizeof(username)-6);
    username[sizeof(username)-1] = 0;

    shellname = strrchr(theshell, '/');
    if (shellname) shellname++;
    else shellname = theshell;

    endpwent();
    if (paranoid) {
        syslog(LOG_INFO|LOG_AUTH, "%s@%s as %s: cmd='%s'",
               remuser, hostname, locuser, cmdbuf);
    }

    /*
     * Close all fds, in case libc has left fun stuff like
     * /etc/shadow open.
     */
    for (ifd = getdtablesize()-1; ifd > 2; ifd--) close(ifd);

    execl(theshell, shellname, "-c", cmdbuf, 0);
    perror(theshell);
    exit(1);
}
struct passwd is documented in POSIX, in pwd.h. It is the structure used to store a given user's /etc/passwd entry. The three fields you mention are these:
uid_t pw_uid: numerical user ID.
char *pw_dir: initial working directory (the home directory).
char *pw_shell: program to use as the shell (the user's default shell).
The function doauth referenced in the code above probably either calls getpwent or simulates it, to fill in the appropriate values for the user on the remote system.
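For illustration, a small example of reading the same fields via getpwnam() (assuming a user named root exists on the system):

#include <pwd.h>
#include <stdio.h>

int main(void)
{
    struct passwd *pwd = getpwnam("root");
    if (pwd) {
        printf("uid:   %d\n", (int)pwd->pw_uid);   /* numerical user ID   */
        printf("home:  %s\n", pwd->pw_dir);        /* initial working dir */
        printf("shell: %s\n", pwd->pw_shell);      /* login shell         */
    }
    return 0;
}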
pv is a pair of file descriptors representing the two ends of a pipe, set up by pipe(). pv[0] is the "read side", pv[1] the "write side". Anything written to pv[1] can be read from pv[0].
In the code above, the parent process does:
close(pv[1]);
stderr_parent(sock, pv[0], pid);
which closes the write side and, I'm guessing, wires the read side to (one of) the sockets used to communicate between the hosts.
The child process on the other hand does this:
close(pv[0]); // close the read side
dup2(pv[1], 2); // clone the write side to fd n° 2 (stderr)
close(pv[1]); // close the original write side (now only
              // writable through fd n° 2)
So basically, the child's stderr stream is now connected to a network stream back to the client.
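A stripped-down sketch of the same plumbing, with error handling omitted: the child's stderr feeds the pipe, and the parent reads it back:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int pv[2];
    pipe(pv);
    if (fork() == 0) {
        close(pv[0]);              /* child: read side unused        */
        dup2(pv[1], 2);            /* stderr now feeds the pipe      */
        close(pv[1]);
        execl("/bin/sh", "sh", "-c", "echo oops >&2", (char *)NULL);
        _exit(1);
    }
    close(pv[1]);                  /* parent: keep only the read side */
    char buf[128];
    ssize_t n = read(pv[0], buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("child said on stderr: %s", buf);
    }
    return 0;
}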
The rest of the code essentially sanitizes the environment (environment variables and working directory), checks permissions, sets the appropriate uid/gid, and finally executes the command the user wanted to run via a shell, using execl(). The actual command run on the remote system will be something like /bin/sh -c <user command string>.
So with your example, assuming for example that your user's shell in /etc/passwd is /bin/bash, the execl call will result in running this:
/bin/bash -c 'ulimit -n'
(Quotes added because the user command is passed as a single argument in the execl call; it is not tokenized.)
I have two pieces of code, both trying to execute something like ls | grep pip.
One works and one does not.
The working code creates two child processes and uses one child to execlp each command. The other tries to do it with a single child, i.e. executing ls in the child and grep in the parent. This does not seem to work, and I can't seem to get any error either.
Can someone tell me what the problem is, and why it exists?
Not Working:
void runpipe()
{
    pid_t childpid;
    int fd[2];
    pipe(fd);
    int saved_stdout;
    int saved_stdin;
    saved_stdout = dup(STDOUT_FILENO);
    saved_stdin = dup(STDIN_FILENO);

    if ((childpid = fork()) == 0)
    {
        dup2(fd[WRITE_END], STDOUT_FILENO);
        close(fd[WRITE_END]);
        execlp("/bin/ls", "ls command", "-l", NULL);
        dup2(STDOUT_FILENO, fd[1]);
        _exit(0);
    }
    else if (childpid > 0)
    {
        dup2(saved_stdout, STDOUT_FILENO);
        dup2(fd[READ_END], STDIN_FILENO);
        close(fd[READ_END]);
        execlp("/bin/grep", "grep", "pip", NULL);
        wait();
        _exit(0);
    }
    else
    {
        printf("ERROR!\n");
    }
}
Working:
int runpipe(){
    pid_t pid;
    int fd[2];
    pipe(fd);
    int i;

    pid = fork();
    if (pid == 0) {
        printf("i'm the child used for ls\n");
        dup2(fd[WRITE_END], STDOUT_FILENO);
        close(fd[READ_END]);
        execlp("ls", "ls", "-al", NULL);
        _exit(0);
    } else {
        pid = fork();
        if (pid == 0) {
            printf("i'm in the second child, which will be used to grep\n");
            dup2(fd[READ_END], STDIN_FILENO);
            close(fd[WRITE_END]);
            execlp("grep", "grep", "pip", NULL);
        }
        else wait();
    }
    return 0;
}
The parent needs to close the write side of the pipe before exec'ing grep. For some reason, your code with the two children closes that file descriptor, but the code with only one child does not. You are leaving several descriptors open, but the write side of the pipe is the important one: the reader (the exec'd grep) will not see end-of-file until all copies of the write side are closed. By failing to close it, grep itself is the process holding it open, so grep will never terminate; it will just wait for more data.
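For comparison, a sketch of the single-fork version with the missing close() added (READ_END and WRITE_END assumed to be 0 and 1, as in the question):

#include <unistd.h>

#define READ_END  0
#define WRITE_END 1

void runpipe(void)
{
    int fd[2];
    pipe(fd);

    if (fork() == 0) {
        /* child: ls writes into the pipe */
        dup2(fd[WRITE_END], STDOUT_FILENO);
        close(fd[READ_END]);
        close(fd[WRITE_END]);
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(1);
    }

    /* parent: grep reads from the pipe */
    dup2(fd[READ_END], STDIN_FILENO);
    close(fd[READ_END]);
    close(fd[WRITE_END]);   /* the crucial close: without it grep never sees EOF */
    execlp("grep", "grep", "pip", (char *)NULL);
    _exit(1);               /* only reached if exec fails */
}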
I am creating a pty using openpty() in C and sharing it between the master/parent and slave/child. The child may fork/exec and pass the file descriptor on to other programs. I want to inject commands into the child, but if I send them immediately they get lost. How can I tell from the parent process that someone is blocking on input from stdin? I happen to be working on SUSE 10, but I would prefer a distro-independent solution.
Edit: the answer to this question is still interesting to me, but it may not be relevant to the problem. I'll get to that below.
A simplified version of the code would be to use the script source code (some of the headers may need to be fixed), and add the lines
char* command = "echo 'Hello World!'\r\n", written = 0;
(void)write(master, command, strlen(command));
(void)write(STDOUT_FILENO, "Sent command\r\n", 14);
before the big
for (;;) {
in main.
I had been executing a csh from script, but I then noticed that the script command was dumping some garbage (as viewed in vi)
^[[>0;115;0c
onto the parent's stdin. If I instead exec a bash shell, nothing gets dumped out and the program injects the command just fine.
I'm still curious about the answer to the question being asked, but it is clearly no longer relevant to my problem, as something else is going on. If anyone does know how to see whether a pty is being read, feel free to answer.
As far as I know, file descriptors will not survive a trip to another process; you can share them between threads, though.
As for knowing when there is something to read, I'd try using select() with the appropriate file descriptor in the read set.
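For example, a minimal sketch of such a poll. One caveat: select() tells you the child has written something readable on the master side, not that the child is itself blocked reading its stdin:

#include <sys/select.h>

/* returns 1 if 'fd' becomes readable within 'ms' milliseconds, 0 otherwise */
static int readable_within(int fd, int ms)
{
    fd_set rfds;
    struct timeval tv = { ms / 1000, (ms % 1000) * 1000 };
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    return select(fd + 1, &rfds, NULL, NULL, &tv) > 0;
}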
I have noticed the same problem with losing data when I write to the master fd.
The problem can be avoided by using the slave fd for writing, and the master fd for the child's stdin.
This way:
#include <pty.h>        /* openpty(); link with -lutil */
#include <string.h>
#include <unistd.h>

int main(void)
{
    int master_fd = -1;
    int slave_fd = -1;

    if( openpty( &master_fd, &slave_fd, NULL, NULL, NULL ) != -1 )
    {
        const pid_t child_pid = fork();
        if( child_pid != -1 )
        {
            if( child_pid )
            {
                const char command[] = "command\n";
                close( master_fd );
                write( slave_fd, command, strlen(command) );
                close( slave_fd );
            }
            else
            {
                close( slave_fd );
                dup2( master_fd, STDIN_FILENO );
                execlp( "/bin/cat", "cat", (char*)0 );
            }
        }
    }
    return 0;
}
You may even add delays to the child process and it still works, so the parent process can exit before the child does anything:
~ # temp_test
~ # command
cat: read error: Input/output error
~ #
EDIT:
Here is a slightly different example, because the error printed by cat was confusing:
if( child_pid )
{
    const char command[] = "command\n";
    close( master_fd );
    write( slave_fd, command, sizeof(command) );
    close( slave_fd );
}
else
{
    char buffer[100];
    ssize_t i;
    ssize_t len;
    close( slave_fd );
    do
    {
        len = read( master_fd, buffer, sizeof(buffer) );
        for( i = 0; i < len; i++ )
            printf("%c", buffer[i] );
    } while( len > 0 );
}
And the result:
~ # temp_test
command
~ #
I'm working on an application that contains several server sockets that each run in a unique thread.
An external utility (script) is called by one of the threads. This script calls a utility (client) that sends a message to one of the server sockets.
Initially, I was using system() to execute this external script, but we couldn't use that because we had to make sure the server sockets were closed in the child that was forked to execute the external script.
I now call fork() and execvp() myself. I fork() and then in the child I close all the server sockets and then call execvp() to execute the script.
Now, all of that works fine. The problem is that at times the script reports errors to the server app. The script sends these errors by calling another application (client) which opens a TCP socket and sends the appropriate data. My issue is that the client app gets a value of 0 returned by the socket() system call.
NOTE: This ONLY occurs when the script/client app is called using my forkExec() function. If the script/client app is called manually the socket() call performs appropriately and things work fine.
Based on that information I suspect it's something in my fork() execvp() code below... Any ideas?
void forkExec()
{
    int stat;

    stat = fork();
    if (stat < 0)
    {
        printf("Error forking child: %s", strerror(errno));
    }
    else if (stat == 0)
    {
        char *progArgs[3];

        /*
         * First, close the file descriptors that the child
         * shouldn't keep open
         */
        close(ServerFd);
        close(XMLSocket);
        close(ClientFd);
        close(EventSocket);
        close(monitorSocket);

        /* build the arguments for the script */
        progArgs[0] = calloc(1, strlen("/path_to_script")+1);
        strcpy(progArgs[0], "/path_to_script");
        progArgs[1] = calloc(1, strlen(arg)+1);
        strcpy(progArgs[1], arg);
        progArgs[2] = NULL; /* argument array must be NULL-terminated for execvp() */

        /* launch the script */
        stat = execvp(progArgs[0], progArgs);
        if (stat != 0)
        {
            printf("Error executing script: '%s' '%s' : %s", progArgs[0], progArgs[1], strerror(errno));
        }
        free(progArgs[0]);
        free(progArgs[1]);
        exit(0);
    }
    return;
}
Client app code:
static int connectToServer(void)
{
    int socketFD = 0;
    int status;
    struct sockaddr_in address;
    struct hostent* hostAddr = gethostbyname("localhost");

    socketFD = socket(PF_INET, SOCK_STREAM, 0);
The above call returns 0.
    if (socketFD < 0)
    {
        fprintf(stderr, "%s-%d: Failed to create socket: %s",
                __func__, __LINE__, strerror(errno));
        return (-1);
    }

    memset(&address, 0, sizeof(struct sockaddr));
    address.sin_family = AF_INET;
    memcpy(&(address.sin_addr.s_addr), hostAddr->h_addr, hostAddr->h_length);
    address.sin_port = htons(POLLING_SERVER_PORT);

    status = connect(socketFD, (struct sockaddr *)&address, sizeof(address));
    if (status < 0)
    {
        if (errno != ECONNREFUSED)
        {
            fprintf(stderr, "%s-%d: Failed to connect to server socket: %s",
                    __func__, __LINE__, strerror(errno));
        }
        else
        {
            fprintf(stderr, "%s-%d: Server not yet available...%s",
                    __func__, __LINE__, strerror(errno));
            close(socketFD);
            socketFD = 0;
        }
    }
    return socketFD;
}
FYI
OS: Linux
Arch: ARM32
Kernel: 2.6.26
socket() returns -1 on error.
A return of 0 means socket() succeeded and gave you file descriptor 0. I suspect that one of the file descriptors you close is descriptor 0, and once it's closed, the next call that allocates a file descriptor will return fd 0, since it is available.
A socket with value 0 is fine. It means stdin was closed, which makes fd 0 available for reuse, for example by a socket.
Chances are one of the file descriptors you close in the forkExec() child path (XMLSocket, ServerFd, etc.) was fd 0. That starts the child with fd 0 closed, which won't happen when you run the app from a command line, because fd 0 will already be open as the stdin of the shell.
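A tiny demonstration of that descriptor-reuse rule: with stdin closed, the kernel hands out the lowest free number, 0, to the new socket.

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    close(0);                                  /* free descriptor 0 (stdin) */
    int s = socket(PF_INET, SOCK_STREAM, 0);
    printf("socket() returned fd %d\n", s);    /* prints 0: lowest free fd */
    return 0;
}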
If you want your socket not to be 0, 1 or 2 (stdin/stdout/stderr), call the following in your forkExec() function after all the close() calls:
void reserve_tty(void)
{
    int fd;
    for (fd = 0; fd < 3; fd++)
    {
        int nfd = open("/dev/null", O_RDWR);

        if (nfd < 0)        /* We're screwed. */
            continue;
        if (nfd == fd)
            continue;
        dup2(nfd, fd);
        if (nfd > 2)
            close(nfd);
    }
}
Check for socket() returning -1, which means an error occurred.
Don't forget a call to
waitpid()
End of "obvious question mode". I'm assuming a bit here, but you're not doing anything with the pid returned by the fork() call. (-:
As mentioned in another comment, you really should not close 0, 1 or 2 (stdin/stdout/stderr). You can add a check to make sure you do not close those, so that they are not handed out as new fds when you request a new socket.