I am creating a pty using openpty in C, and sharing it between the master/parent and slave/child. The child could fork/exec and pass the file descriptor on to other programs. I want to inject commands into the child, but if I pass them immediately they get lost. How can I tell from the parent process that someone is blocking on input from stdin? I happen to be working on SUSE 10, but I would prefer a distro-independent solution.
Edit: The answer to this question is still interesting to me, but may not be relevant to the problem. I'll get to that later.
A simplified version of the code would be to take the source of the script(1) command (some of the headers may need to be fixed), and add the lines
const char* command = "echo 'Hello World!'\r\n";
(void)write(master, command, strlen(command));
(void)write(STDOUT_FILENO, "Sent command\r\n", 14);
before the big
for (;;) {
in main.
I had been executing a csh from script, but I then noticed that the script command was dumping some garbage (as viewed in vi)
^[[>0;115;0c
onto the parent's stdin. If I instead exec a bash shell, nothing gets dumped out and the program injects the command just fine.
I'm still curious as to the answer to the question being asked, but it is clearly no longer relevant to my problem, as there is something else going on. If anyone does know how to see if a pty is being read feel free to answer.
File descriptors are inherited across fork()/exec(), so your child can pass them on that way; they cannot be handed to an unrelated process, though, except by sending them over a Unix domain socket (SCM_RIGHTS). You can also share them freely between threads.
As for knowing when there is something to read, I'd try using select with the appropriate file descriptor in the read set.
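For example, a minimal sketch (assuming master is the descriptor returned by openpty); note that this reports data waiting on the master side, not that the child is blocked reading the slave:

#include <sys/select.h>

/* Returns 1 if the master side has data to read, 0 if not, -1 on error.
   The zero timeout makes this a non-blocking poll. */
int pty_readable(int master)
{
    fd_set rfds;
    struct timeval tv = { 0, 0 };

    FD_ZERO(&rfds);
    FD_SET(master, &rfds);
    return select(master + 1, &rfds, NULL, NULL, &tv);
}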
I have noticed the same problem with data getting lost when I write to the master fd.
The problem can be avoided by using the slave fd for writing, and the master fd for the child's stdin.
This way:
#include <pty.h>      /* openpty(); link with -lutil */
#include <string.h>
#include <unistd.h>

int main(void)
{
    int master_fd = -1;
    int slave_fd  = -1;

    if( openpty( &master_fd, &slave_fd, NULL, NULL, NULL ) != -1 )
    {
        const pid_t child_pid = fork();
        if( child_pid != -1 )
        {
            if( child_pid )
            {
                /* parent: write the command to the slave side */
                const char command[] = "command\n";
                close( master_fd );
                write( slave_fd, command, strlen(command) );
                close( slave_fd );
            }
            else
            {
                /* child: read stdin from the master side */
                close( slave_fd );
                dup2( master_fd, STDIN_FILENO );
                execlp( "/bin/cat", "cat", (char*)0 );
            }
        }
    }
    return 0;
}
You may even add delays to the child process and it still works, so the parent process can exit before the child process does anything:
~ # temp_test
~ # command
cat: read error: Input/output error
~ #
EDIT:
A slightly different example, because the error printout from cat is confusing:
if( child_pid )
{
    const char command[] = "command\n";
    close( master_fd );
    write( slave_fd, command, sizeof(command) - 1 );  /* -1: don't send the NUL */
    close( slave_fd );
}
else
{
    char buffer[100];
    ssize_t i;
    ssize_t len;

    close( slave_fd );
    do
    {
        len = read( master_fd, buffer, sizeof(buffer) );
        for( i = 0; i < len; i++ )
            printf("%c", buffer[i] );
    } while( len > 0 );
}
And the result:
~ # temp_test
command
~ #
I am porting a debugger, 'pi' ('process inspector') to Linux and am
working on the code for fork/exec of a child to inspect it. I am
following standard procedure (I believe) but the wait is hanging.
'hang' is the procedure which does the work, the 'cmd' argument being
the name of the binary (a.out) to trace:
int Hostfunc::hang(char *cmd){
    char *argv[10], *cp;
    int i;
    Localproc *p;
    struct exec exec;
    struct rlimit rlim;

    i = strlen(cmd);
    if (++i > sizeof(procbuffer)) {
        i = sizeof(procbuffer) - 1;
        procbuffer[i] = 0;
    }
    bcopy(cmd, procbuffer, i);
    argv[0] = cp = procbuffer;
    for(i = 1;;) {
        while(*cp && *cp != ' ')
            cp++;
        if (!*cp) {
            argv[i] = 0;
            break;
        } else {
            *cp++ = 0;
            while (*cp == ' ')
                cp++;
            if (*cp)
                argv[i++] = cp;
        }
    }
    hangpid = fork();
    if (!hangpid){
        int fd, nfiles = 20;
        if (getrlimit(RLIMIT_NOFILE, &rlim) == 0)  /* getrlimit() returns 0 on success */
            nfiles = rlim.rlim_cur;
        for( fd = 0; fd < nfiles; ++fd )
            close(fd);
        open("/dev/null", O_RDWR);   /* becomes fd 0 */
        dup2(0, 1);
        dup2(0, 2);
        setpgid(0, 0);
        ptrace(PTRACE_TRACEME, 0, 0, 0);
        execvp(argv[0], argv);
        exit(0);
    }
    if (hangpid < 0)
        return 0;
    p = new Localproc;
    if (!p) {
        kill(hangpid, SIGKILL);   /* kill(pid, sig) */
        return 0;
    }
    p->sigmsk = sigmaskinit();
    p->pid = hangpid;
    if (!procwait(p, 0)) {
        delete p;
        return 0;
    }
    if (p->state.state == UNIX_BREAKED)
        p->state.state = UNIX_HALTED;
    p->opencnt = 0;
    p->next = phead;
    phead = p;
    return hangpid;
}
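For reference, the documented sequence the code is trying to follow boils down to something like this (a minimal sketch, error handling elided):

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

pid_t trace_child(char *const argv[])
{
    pid_t pid = fork();
    if (pid == 0) {
        ptrace(PTRACE_TRACEME, 0, 0, 0);  /* ask to be traced by our parent */
        execvp(argv[0], argv);            /* a successful exec stops the child with SIGTRAP */
        _exit(127);                       /* only reached if exec fails */
    }
    if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);         /* should return when the child stops at exec */
    }
    return pid;
}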
I put an 'abort()' in (not shown above) to catch a non-zero return from
ptrace, but that is not happening. The call to 'raise' seems to be a
common practice, but a cursory look at gdb's code reveals it is
not used there. In any case it makes no difference to the outcome.
`procwait' is as follows:
int Hostfunc::procwait(Localproc *p, int flag){
    int tstat;
    int cursig;
again:
    if (p->pid != waitpid(p->pid, &tstat, (flag&WAIT_POLL)? WNOHANG: 0))
        return 0;
    if (flag & WAIT_DISCARD)
        return 1;
    if (WIFSTOPPED(tstat)) {
        cursig = WSTOPSIG(tstat);
        if (cursig == SIGSTOP)
            p->state.state = UNIX_HALTED;
        else if (cursig == SIGTRAP)
            p->state.state = UNIX_BREAKED;
        else {
            if (p->state.state == UNIX_ACTIVE &&
                !(p->sigmsk&bit(cursig))) {
                ptrace(PTRACE_CONT, p->pid, 1, cursig);  /* ptrace takes four arguments */
                goto again;
            }
            else {
                p->state.state = UNIX_PENDING;
                p->state.code = cursig;
            }
        }
    } else {
        p->state.state = UNIX_ERRORED;
        p->state.code = WEXITSTATUS(tstat) & 0xFFFF;
    }
    return 1;
}
The 'waitpid' in 'procwait' just hangs. If I run the program with
the above code, and run a 'ps', I can see that 'pi' has forked
but hasn't yet called exec, because the command line is still
'pi', and not the name of the binary I am forking. I discovered
that if I remove the 'raise', 'pi' still hangs but 'ps' now
shows that the forked program has the name of the binary being
examined, which suggests it has performed the exec.
So, as far as I can see, I am following documented procedures to
take control of a forked process but it isn't working.
Noel Hunt
I have found the problem (with my own code, as Nate pointed out), but the cause was obscure until I ran 'strace pi'. It was clear from that output that there was a SIGCHLD handler, and that it was executing a wait. The parent enters wait, SIGCHLD is delivered, the handler's wait reaps the status of the child, then the parent's wait is restarted and hangs because there is no longer any change of state to report. The SIGCHLD handler makes sense, because pi wants to be informed of state changes in the child. The first version of 'pi' I got working was a Solaris version, which uses /proc for process control, so there was no use of 'wait' to get child status; hence I didn't see this problem in the Solaris version.
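A minimal sketch of the race described above (a hypothetical handler, not pi's actual code):

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static void on_sigchld(int sig)
{
    (void)sig;
    wait(NULL);            /* the handler reaps the child's status... */
}

int main(void)
{
    signal(SIGCHLD, on_sigchld);
    pid_t pid = fork();
    if (pid == 0)
        _exit(0);
    /* ...so this waitpid() finds no state change left to report and
       blocks, or fails with ECHILD, depending on timing. */
    waitpid(pid, NULL, 0);
    return 0;
}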
I'm trying to check, from kernel space, whether a file on a jffs2 fs exists. I have a Barrier Breaker OpenWrt with a 3.10.14 Linux kernel. (There's an MTD subsystem in use, so I have pseudo block devices for the partitions on a NAND flash: /dev/mtdblock1, ..., /dev/mtdblock12.)
(I'm implementing some upgrade logic which requires keeping some state between reboots; I use the file to store this state.)
To check the file's existence I just try to open it (without the O_CREAT flag) and make a decision based on the result of the open. I use the approach from the following article about opening a file from within the kernel: Read/write files within a Linux kernel module.
I'm able to check the file's existence this way, but the file is placed on a partition other than rootfs, so I have to mount that partition before I can open the file on it.
I'm able to mount the partition, open the file (to check its existence) and close it if it was opened, but I fail to un-mount it: I get error -16 (EBUSY), which I guess means that someone else is still using this block device/mount point. So the question is: who can be keeping a reference to it?
I know it's a bad idea, but just as a test I tried to un-mount forcibly with MNT_FORCE; as https://linux.die.net/man/2/umount states, this option is only for NFS filesystems, and indeed nothing changed.
Here's the code:
/* see https://stackoverflow.com/questions/1184274/read-write-files-within-a-linux-kernel-module */
static struct file *file_open( const char *path, int flags, int rights )
{
    struct file *filp = NULL;
    mm_segment_t oldfs;
    int err = 0;

    oldfs = get_fs();
    set_fs( get_ds() );
    filp = filp_open( path, flags, rights );
    set_fs( oldfs );
    if( IS_ERR( filp ) )
    {
        err = PTR_ERR( filp );
        return NULL;
    }
    return filp;
}
bool need_to_switch_to_me_upgrade_mode( void )
{
    struct file* me_upgrade_finished_file;
    dev_t me_upgrade_dev;
    char full_name[256] = { 0 };
    bool result;
    int err;

    const char* me_upgrade_dir = "/me_upgrade";
    const char* me_upgrade_dev_name = "/dev/me_upgrade";
    const char* me_upgrade_finished_flag = "/etc/me_upgrade_finished";
    // /dev/mtdblock6
    const char* me_upgrade_finished_flag_partition = "/dev/mtdblock" str( RECOVERY_ROOTFS_DATA_MTD_PART_NUM );

    err = sys_mkdir( (const char __user __force *) me_upgrade_dir, 0700 );
    if( err < 0 )
        panic( "fail to mkdir %s\n", me_upgrade_dir );

    me_upgrade_dev = name_to_dev_t( me_upgrade_finished_flag_partition );
    err = create_dev( me_upgrade_dev_name, me_upgrade_dev );
    if( err < 0 )
        panic( "fail to create_dev %s\n", me_upgrade_dev_name );

    err = sys_mount( me_upgrade_dev_name, me_upgrade_dir, "jffs2", MS_SILENT, NULL );
    if( err < 0 )
        panic( "fail to mount %s on to %s, err: %d\n", me_upgrade_dev_name, me_upgrade_dir, err );

    strlcat( full_name, me_upgrade_dir, sizeof( full_name ) );
    strlcat( full_name, me_upgrade_finished_flag, sizeof( full_name ) );

    me_upgrade_finished_file = file_open( full_name, O_RDONLY, 0 );
    if( !me_upgrade_finished_file )
    {
        printk( "fail to open a %s file\n", full_name );
        result = true;
    }
    else
    {
        printk( "success to open a file\n" );
        result = false;
    }

    if( me_upgrade_finished_file )
    {
        err = filp_close( me_upgrade_finished_file, NULL );
        printk( "filp_close returned: %d\n", err );
    }

    err = sys_umount( me_upgrade_dir, MNT_DETACH );
    printk( "sys_umount returned: %d\n", err );

    sys_unlink( me_upgrade_dev_name ); // destroy_dev( me_upgrade_dev_name );
    sys_rmdir( me_upgrade_dir );

    return result;
}
This code is called from the kernel_init_freeable function (init/main.c) after the MTD subsystem is initialized (after do_basic_setup() and before the rootfs gets mounted).
So the questions are:
who can keep using the block device/mount point after I have closed the file?
are there any other ways to check whether a file exists from within the kernel? (see the sketch at the end of this post)
I have a second option: place my state in a partition without any fs and check it by raw access to the flash memory, but this approach would require significant changes to the user code I have now, so I'm trying to avoid it...
P.S. I tried replacing the file_open/filp_close calls with sys_open/sys_close (with the same arguments), but nothing changed...
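Regarding the second question, one alternative (a minimal sketch, assuming a 3.10-era kernel) is to resolve the path with kern_path() instead of opening the file at all:

#include <linux/namei.h>
#include <linux/path.h>
#include <linux/types.h>

/* Returns true if the path resolves; takes and drops a reference,
   but never opens the file. */
static bool kernel_file_exists( const char *name )
{
    struct path path;

    if( kern_path( name, LOOKUP_FOLLOW, &path ) )
        return false;
    path_put( &path );   /* drop the reference taken by kern_path() */
    return true;
}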
I have a function that generates multiple threads, and I pass a different string to each of them, but it seems that the threads all get the same string. The strings are coming from a socket. Here is the code:
pthread_t *MirrorManager;
MirrorManager = malloc(sizeof(pthread_t)*th_size);
if( MirrorManager == NULL ) { perror("malloc"); exit(1); }
/* -------------------------------------- */
int th_num = 0;
while( true )
{
    received = 0;
    /* Read the desired readable size */
    if( read(newsock, &size, sizeof(size)) < 0 )
        { perror("Read"); exit(1); }
    /* Read all data */
    while( received < size )
    {
        if( (nread = read(newsock, buffer + received, size - received)) < 0 )
            { perror("Read"); exit(1); }
        received += nread;
    }
    if( strcmp(buffer, "end") == 0 ) { break; }
    printf("Received string: %s\n", buffer);
    /* Create thread */
    strcpy(th_str, buffer);
    if( (err = pthread_create(&MirrorManager[th_num], NULL, thread_start, (void*) th_str)) != 0 )
        { show_error("pthread_create", err); }
    /* Take next thread */
    th_num++;
}
Here I generate two threads. The two threads have the same string; in fact, it is the last string that came out of the socket. Why is this happening, and how can I prevent it? Please help, I have been stuck here for a few days now.
You should post the complete code.
Given what you have posted, it looks like your issue is that all of your threads share the same parameter th_str:
pthread_create(&MirrorManager[th_num], NULL, thread_start, (void*) th_str))
Instead you should allocate a separate th_str for each thread, since you're passing each thread a pointer to it rather than the string itself.
th_str = malloc(strlen(buffer) + 1);   /* +1 for the terminating NUL */
strcpy(th_str, buffer);
And then be sure to have each thread free the pointer that was passed into it.
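For instance, a minimal sketch of the ownership pattern (thread_start as in the question; the thread frees the copy handed to it):

void *thread_start(void *arg)
{
    char *str = arg;   /* this thread's private copy of the buffer */
    /* ... work with str ... */
    free(str);         /* the thread owns the allocation, so it frees it */
    return NULL;
}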
PS: I'd strongly recommend using strncmp and strncpy on all data from your socket.
I am creating a shell command in my custom shell to ssh from one terminal to another.
To do the ssh, I am using the built-in ssh command of Linux. Here is the code that does the ssh login.
However, I am seeing that the I/O buffers are not in sync.
This is what I see on the terminal after SSHing to the other terminal; I typed the following:
PRT# ssh 192.168.10.42
PRT# Could not create directory '/root/.ssh'.
root#192.168.10.42's password:
# screen -r
-sh: cen-: not found
# hello
-sh: el: not found
#
I don't know what the reason is here. Here is the code.
int sshLogin(char *destIp)
{
    char cmd[CMD_LEN];
    char readbuff[CMD_LEN];
    pid_t pid;
    int ret = 0;
    int fd[2];
    int result;
    int status = 0;

    memset(cmd, '\0', sizeof(cmd));
    /** -tt required to force pseudo-terminal allocation because we are behind the screen app **/
    sprintf(cmd, "/usr/bin/ssh -tt %s", destIp);
    /** create a pipe; this will be shared on fork() **/
    pipe(fd);
    if((pid = fork()) == -1)
    {
        perror("fork:");
        return -1;
    }
    if( pid == 0 )
    {
        /** Child process of main APP -- make this the parent process for the command **/
        if((pid = fork()) == -1)
        {
            perror("fork:");
            return -1;
        }
        if( pid == 0)
        {
            /** basically main APP grandchild - this is where we run the command **/
            ret = execlp("ssh", "ssh", "-tt", destIp, NULL);
            printf("done execlp\r\n");
        }
        else
        {
            /** child of main APP -- keep this blocked until the main APP grandchild is done with the job **/
            while( (read(fd[0], readbuff, sizeof(readbuff))))
            {
                printf("%s", readbuff);
            }
            waitpid(0, &status, 0);
            LOG_STRING("SSH CONNC CLOSED");
            exit(0);
        }
    }
    else
    {
        /** Parent process APP MAIN -- **/
        /** no need to wait, let APP MAIN run -- **/
    }
    return 0;
}
Based on Patrick's ideas.
POST #2 - It seems that it works when I close stdin in the parent process. However, it becomes very sluggish; I feel like I am typing on the keyboard too slowly. Also, I have a web server on this terminal, and I see that I can no longer access the web.
So the solution is somewhere around stdin, but I am not sure where.
int sshLogin(char *destIp)
{
    char cmd[CMD_LEN];
    char readbuff[CMD_LEN];
    pid_t pid;
    int ret = 0;
    int fd[2];
    int result;
    int status = 0;

    memset(cmd, '\0', sizeof(cmd));
    /** -tt required to force pseudo-terminal allocation because we are behind the screen app **/
    sprintf(cmd, "/usr/bin/ssh -tt %s", destIp);
    /** create a pipe; this will be shared on fork() **/
    pipe(fd);
    if((pid = fork()) == -1)
    {
        perror("fork:");
        return -1;
    }
    if( pid == 0 )
    {
        /** Child process of main APP -- make this the parent process for the command **/
        if((pid = fork()) == -1)
        {
            perror("fork:");
            return -1;
        }
        if( pid == 0)
        {
            /** basically main APP grandchild - this is where we run the command **/
            ret = execlp("ssh", "ssh", "-tt", destIp, NULL);
            printf("done execlp\r\n");
        }
        else
        {
            /** child of main APP -- keep this blocked until the main APP grandchild is done with the job **/
            while( (read(fd[0], readbuff, sizeof(readbuff))))
            {
                printf("%s", readbuff);
            }
            waitpid(0, &status, 0);
            LOG_STRING("SSH CONNC CLOSED");
            exit(0);
        }
    }
    else
    {
        /** Parent process APP MAIN -- **/
        /** no need to wait, let APP MAIN run -- **/
        close(STDIN_FILENO);   /* was close(stdin); stdin is a FILE*, close() takes an fd */
    }
    return 0;
}
Basically, I have added close(STDIN_FILENO); in the parent.
You have 2 different processes trying to read from STDIN. This causes process 1 to get char 1, process 2 to get char 2, process 1 to get char 3, process 2 to get char 4, etc, alternating back and forth.
Your 2 processes are:
execlp("ssh", "ssh", "-tt", destIp, NULL);.
while( (read(fd[0], readbuff, sizeof(readbuff))))
Basically you need to ditch the read(fd[0],...).
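A minimal sketch of that suggestion (assuming the parent may simply wait for ssh to finish; error handling elided):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int sshLogin(char *destIp)
{
    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return -1; }
    if (pid == 0)
    {
        execlp("ssh", "ssh", "-tt", destIp, (char *)NULL);
        _exit(127);              /* only reached if exec fails */
    }
    /* Nobody else touches the terminal: ssh alone owns stdin until it exits. */
    int status;
    waitpid(pid, &status, 0);
    return 0;
}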
My initial thought is that perhaps it is buffering the output: stdout is buffered, so unless you print a newline, nothing is printed until a certain number of characters builds up. This is because I/O operations are expensive. You can find more detail on this here. The result is that there is a delay because your program is waiting to print.
My suggestion: in your main function, before calling your sshLogin function, try disabling buffering with this line of code:
setbuf(stdout, NULL);
You can also call fflush(stdout); periodically to do the same thing, but the above method is more efficient. Try it and see if that solves your problem.
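For example (sshLogin as in the question; the setbuf call must run before anything is printed):

#include <stdio.h>

int sshLogin(char *destIp);          /* from the question */

int main(void)
{
    setbuf(stdout, NULL);            /* disable stdout buffering entirely */
    return sshLogin("192.168.10.42");/* destination IP from the question's transcript */
}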
I have managed to use ShellExecute in VC++ to launch a document.
Now I wish to run a command-line tool that receives some arguments, have it run in the background (hidden, not minimized), and let it block my program flow so that I'll be able to wait for it to finish.
How do I alter the command line of:
ShellExecute(NULL,"open",FULL_PATH_TO_CMD_LINE_TOOL,ARGUMENTS,NULL,SW_HIDE);
The problem is, I have a tool that converts HTML to PDF, and I wish that once the tool finishes, i.e. the PDF is ready, to have another ShellExecute to view it.
There is a CodeProject article that shows how, by using ShellExecuteEx instead of ShellExecute:
SHELLEXECUTEINFO ShExecInfo = {0};
ShExecInfo.cbSize = sizeof(SHELLEXECUTEINFO);
ShExecInfo.fMask = SEE_MASK_NOCLOSEPROCESS;
ShExecInfo.hwnd = NULL;
ShExecInfo.lpVerb = NULL;
ShExecInfo.lpFile = "c:\\MyProgram.exe";
ShExecInfo.lpParameters = "";
ShExecInfo.lpDirectory = NULL;
ShExecInfo.nShow = SW_SHOW;
ShExecInfo.hInstApp = NULL;
ShellExecuteEx(&ShExecInfo);
WaitForSingleObject(ShExecInfo.hProcess, INFINITE);
CloseHandle(ShExecInfo.hProcess);
The crucial point is the flag SEE_MASK_NOCLOSEPROCESS, which, as MSDN says
Use to indicate that the hProcess member receives the process handle. This handle is typically used to allow an application to find out when a process created with ShellExecuteEx terminates
Also, note that:
The calling application is responsible for closing the handle when it is no longer needed.
You can also use CreateProcess instead of ShellExecute/ShellExecuteEx. The function below wraps it, with an option to run the command via cmd.exe, and returns the exit code and the stdout text. (The includes may not be perfect.)
Notes: In my use, I knew that there had to be stdout results, but the PeekNamedPipe function wouldn't always return the byte count on the first try, hence the loop there. Perhaps someone can figure this out and post a revision? Also, maybe an alternate version should be produced which returns stderr separately?
#include <stdio.h>
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include <windows.h>
#include <atlstr.h>    // CStringW
#include <Shellapi.h>
/*
Note:
The exitCode for a "Cmd Process" is not the exitCode
for a sub process launched from it! That can be retrieved
via the errorlevel variable in the command line like so:
set errorlevel=&[launch command]&echo.&echo exitCode=%errorlevel%&echo.
The stdOut vector will then contain the exitCode on a separate line
*/
BOOL executeCommandLine( const CStringW &command,
DWORD &exitCode,
const BOOL asCmdProcess=FALSE,
std::vector<CStringW> *stdOutLines=NULL )
{
// Init return values
BOOL bSuccess = FALSE;
exitCode = 0;
if( stdOutLines ) stdOutLines->clear();
// Optionally prepend cmd.exe to command line to execute
CStringW cmdLine( (asCmdProcess ? L"cmd.exe /C " : L"" ) +
command );
// Create a pipe for the redirection of the STDOUT
// of a child process.
HANDLE g_hChildStd_OUT_Rd = NULL;
HANDLE g_hChildStd_OUT_Wr = NULL;
SECURITY_ATTRIBUTES saAttr;
saAttr.nLength = sizeof(SECURITY_ATTRIBUTES);
saAttr.bInheritHandle = TRUE;
saAttr.lpSecurityDescriptor = NULL;
bSuccess = CreatePipe( &g_hChildStd_OUT_Rd,
&g_hChildStd_OUT_Wr, &saAttr, 0);
if( !bSuccess ) return bSuccess;
bSuccess = SetHandleInformation( g_hChildStd_OUT_Rd,
HANDLE_FLAG_INHERIT, 0 );
if( !bSuccess ) return bSuccess;
// Setup the child process to use the STDOUT redirection
PROCESS_INFORMATION piProcInfo;
STARTUPINFO siStartInfo;
ZeroMemory( &piProcInfo, sizeof(PROCESS_INFORMATION) );
ZeroMemory( &siStartInfo, sizeof(STARTUPINFO) );
siStartInfo.cb = sizeof(STARTUPINFO);
siStartInfo.hStdError = g_hChildStd_OUT_Wr;
siStartInfo.hStdOutput = g_hChildStd_OUT_Wr;
siStartInfo.dwFlags |= STARTF_USESTDHANDLES;
// Execute a synchronous child process & get exit code
bSuccess = CreateProcess( NULL,
cmdLine.GetBuffer(), // command line
NULL, // process security attributes
NULL, // primary thread security attributes
TRUE, // handles are inherited
0, // creation flags
NULL, // use parent's environment
NULL, // use parent's current directory
&siStartInfo, // STARTUPINFO pointer
&piProcInfo ); // receives PROCESS_INFORMATION
if( !bSuccess ) return bSuccess;
WaitForSingleObject( piProcInfo.hProcess, INFINITE );
GetExitCodeProcess( piProcInfo.hProcess, &exitCode );
CloseHandle( piProcInfo.hProcess );
CloseHandle( piProcInfo.hThread );
// Return if the caller is not requesting the stdout results
if( !stdOutLines ) return TRUE;
// Read the data written to the pipe
DWORD bytesInPipe = 0;
while( bytesInPipe==0 ){
bSuccess = PeekNamedPipe( g_hChildStd_OUT_Rd, NULL, 0, NULL,
&bytesInPipe, NULL );
if( !bSuccess ) return bSuccess;
}
if( bytesInPipe == 0 ) return TRUE;
DWORD dwRead;
CHAR *pipeContents = new CHAR[ bytesInPipe + 1 ];   // +1 for the NUL below
bSuccess = ReadFile( g_hChildStd_OUT_Rd, pipeContents,
bytesInPipe, &dwRead, NULL);
if( !bSuccess || dwRead == 0 ) return FALSE;
pipeContents[ dwRead ] = '\0';   // terminate before treating the buffer as a C string
// Split the data into lines and add them to the return vector
std::stringstream stream( pipeContents );
std::string str;
while( getline( stream, str ) )
stdOutLines->push_back( CStringW( str.c_str() ) );
delete[] pipeContents;
return TRUE;
}
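A usage sketch (the command here is illustrative):

DWORD exitCode = 0;
std::vector<CStringW> lines;

// Run via cmd.exe and collect stdout; "ipconfig" is just an example command.
if( executeCommandLine( L"ipconfig", exitCode, TRUE, &lines ) )
{
    wprintf( L"exit code: %lu\n", exitCode );
    for( size_t i = 0; i < lines.size(); ++i )
        wprintf( L"%s\n", (LPCWSTR)lines[i] );
}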
Using ShellExecuteEx sometimes doesn't work if COM is used, so the following remarks must be considered.
Because ShellExecuteEx can delegate execution to Shell extensions
(data sources, context menu handlers, verb implementations) that are
activated using Component Object Model (COM), COM should be
initialized before ShellExecuteEx is called. Some Shell extensions
require the COM single-threaded apartment (STA) type. In that case,
COM should be initialized as shown here:
CoInitializeEx(NULL, COINIT_APARTMENTTHREADED | COINIT_DISABLE_OLE1DDE)
There are instances where ShellExecuteEx does not use one of these
types of Shell extension and those instances would not require COM to
be initialized at all. Nonetheless, it is good practice to always
initialize COM before using this function.
More from MSDN here
https://learn.microsoft.com/en-us/windows/win32/api/shellapi/nf-shellapi-shellexecuteexa
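Putting that together, a minimal sketch (ShExecInfo filled in as in the earlier answer):

CoInitializeEx(NULL, COINIT_APARTMENTTHREADED | COINIT_DISABLE_OLE1DDE);

SHELLEXECUTEINFO ShExecInfo = {0};
ShExecInfo.cbSize = sizeof(SHELLEXECUTEINFO);
ShExecInfo.fMask = SEE_MASK_NOCLOSEPROCESS;
ShExecInfo.lpFile = "c:\\MyProgram.exe";
ShExecInfo.nShow = SW_SHOW;

if (ShellExecuteEx(&ShExecInfo))
{
    WaitForSingleObject(ShExecInfo.hProcess, INFINITE);
    CloseHandle(ShExecInfo.hProcess);   // caller must close the handle
}
CoUninitialize();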