I'm implementing features of an SSH server: given a shell request, I open a PTY/TTY pair.
A snippet:
import (
    "fmt"
    "io"
    "os/exec"
    "syscall"

    "github.com/creack/pty"
    "golang.org/x/crypto/ssh"
)

func attachPty(channel ssh.Channel, shell *exec.Cmd) {
    mypty, err := pty.Start(shell)
    if err != nil {
        return // error handling elided
    }
    go func() {
        io.Copy(channel, mypty) // (1) ; could also be substituted with a read() syscall, same problem
    }()
    go func() {
        io.Copy(mypty, channel) // (2) - this returns on channel EOF, so let's close mypty
        if err := syscall.Close(int(mypty.Fd())); err != nil {
            fmt.Printf("error closing fd") // no error is printed; /proc/<pid>/fd shows the fd was successfully closed
        }
    }()
}
Once the SSH channel gets closed, I close the pty. My expected behavior is that this should send SIGHUP to the shell.
If I comment out the (1) copy (src: mypty, dst: channel), it works!
However, when it's not commented out:
the (1) copy doesn't return, meaning the read syscall on mypty is still blocked and never returns EOF, so presumably the master device never actually gets closed?
the shell doesn't get SIGHUP
I'm not sure why commenting out the (1) copy makes it work; maybe the kernel reference-counts the in-flight reads?
My leads:
pty.read is actually dispatched to the tty, as said in:
pty master missing read function
Walkthrough of SIGHUP flow
pty_close in drivers/tty/pty.c, which calls tty_vhangup(tty->link) (see here)
Linux Device Drivers, 3rd edition, PTY chapter
Go notes:
I close the fd directly because otherwise, using the usual os.File.Close(), the fd doesn't actually get closed for some reason; it stays open in /proc/<pid>/fd.
Substituting the (1) copy with a direct read syscall leads to the same outcome.
Thank you!
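One workaround consistent with the leads above (a sketch in the same snippet style, not a verified fix; it assumes shell.Process is valid after pty.Start and that delivering SIGHUP explicitly is an acceptable substitute for the kernel's close-triggered hangup):

import (
    "io"
    "log"
    "os/exec"
    "syscall"

    "github.com/creack/pty"
    "golang.org/x/crypto/ssh"
)

func attachPtyWorkaround(channel ssh.Channel, shell *exec.Cmd) error {
    mypty, err := pty.Start(shell)
    if err != nil {
        return err
    }
    go func() {
        io.Copy(channel, mypty) // (1) unchanged
    }()
    go func() {
        io.Copy(mypty, channel) // (2) returns once the channel sees EOF

        // Deliver the hangup explicitly instead of relying on the close of
        // the master fd, which the blocked read in (1) keeps alive.
        if err := shell.Process.Signal(syscall.SIGHUP); err != nil {
            log.Printf("signal: %v", err)
        }
        // Close through *os.File; if the fd is registered with Go's runtime
        // poller, this also unblocks the pending Read in copy (1).
        if err := mypty.Close(); err != nil {
            log.Printf("close: %v", err)
        }
    }()
    return nil
}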
Related
Is there any way for a writer to know that a reader has closed its end of a named pipe (or exited), without writing to it?
I need to know this because the initial data I write to the pipe is different; the reader is expecting an initial header before the rest of the data comes.
Currently, I detect this when my write() fails with EPIPE. I then set a flag that says "next time, send the header". However, it is possible for the reader to close and re-open the pipe before I've written anything. In this case, I never realize what he's done, and don't send the header he is expecting.
Is there any sort of async event type thing that might help here? I'm not seeing any signals being sent.
Note that I haven't included any language tags, because this question should be considered language-agnostic. My code is Python, but the answers should apply to C, or any other language with system call-level bindings.
If you are using an event loop based on the poll system call, you can register the pipe with an event mask that contains EPOLLERR. In Python, with select.poll:
import select
fd = open("pipe", "w")
poller = select.poll()
poller.register(fd, select.POLLERR)
poller.poll()
will wait until the pipe is closed.
To test this, run mkfifo pipe, start the script, and in another terminal run, for example, cat pipe. As soon as you quit the cat process, the script will terminate.
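A rough equivalent of the same check, sketched in Go using golang.org/x/sys/unix (an assumption: the FIFO named "pipe" already exists, as in the mkfifo test above):

package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

func main() {
    // Opening a FIFO write-only blocks until some reader opens the other end.
    fd, err := unix.Open("pipe", unix.O_WRONLY, 0)
    if err != nil {
        panic(err)
    }
    defer unix.Close(fd)

    // poll(2) reports POLLERR on the write end once the last reader closes;
    // POLLERR is reported even if not requested, but listing it is clearer.
    fds := []unix.PollFd{{Fd: int32(fd), Events: unix.POLLERR}}
    if _, err := unix.Poll(fds, -1); err != nil {
        panic(err)
    }
    fmt.Println("reader closed its end of the pipe")
}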
Oddly enough, it appears that when the last reader closes the pipe, select indicates that the pipe is readable:
writer.py
#!/usr/bin/env python
import os
import select
import time

NAME = 'fifo2'
os.mkfifo(NAME)

def select_test(fd, r=True, w=True, x=True):
    rset = [fd] if r else []
    wset = [fd] if w else []
    xset = [fd] if x else []
    t0 = time.time()
    r, w, x = select.select(rset, wset, xset)
    print 'After {0} sec:'.format(time.time() - t0)
    if fd in r: print ' {0} is readable'.format(fd)
    if fd in w: print ' {0} is writable'.format(fd)
    if fd in x: print ' {0} is exceptional'.format(fd)

try:
    fd = os.open(NAME, os.O_WRONLY)
    print '{0} opened for writing'.format(NAME)
    print 'select 1'
    select_test(fd)
    os.write(fd, 'test')
    print 'wrote data'
    print 'select 2'
    select_test(fd)
    print 'select 3 (no write)'
    select_test(fd, w=False)
finally:
    os.unlink(NAME)
Demo:
Terminal 1:
$ ./pipe_example_simple.py
fifo2 opened for writing
select 1
After 1.59740447998e-05 sec:
3 is writable
wrote data
select 2
After 2.86102294922e-06 sec:
3 is writable
select 3 (no write)
After 2.15910816193 sec:
3 is readable
Terminal 2:
$ cat fifo2
test
# (wait a sec, then Ctrl+C)
There is no such mechanism. Generally, in keeping with the UNIX way, there are no signals for a stream being opened or closed on either end. This can only be detected by reading from or writing to the stream (accordingly).
I would say this is a design problem. Currently you are trying to have the receiver signal its readiness to receive by opening a pipe. So either implement this signaling in an appropriate way, or incorporate the "closing logic" into the sending side of the pipe.
I am developing a device driver on Mac. My question is: how can I make an asynchronous device request synchronous? I send a send-encapsulated command to the device and get its response using a get-encapsulated command, after a notification arrives on the interrupt pipe. How can I make my thread wait until both of those requests (the send and the get) have completed? The function the get-encapsulated call is made from is a virtual function called by the upper layer, so if I wait inside that virtual function, I am not able to receive the response while my thread is waiting.
Please help me resolve this problem.
Thanks in advance.
bool MyClass::USBSetPacketFilter()
{
    IOReturn value;
    // ...
    value = send_Encapsulated_command(/* pointer to structure */, length);
    IOLockSleepDeadline(x, y, z, w);
    // XVZ is a global variable, updated when get_Encap has completed.
    if (value == kIOReturnSuccess && XVZ == true)
        return true;
    else
        return false;
}
In another function, I read the interrupt pipe:
pipe->Read(mMemDes, &m_CommInfo, NULL);
In the m_CommInfo callback function we check whether it is a device response; if so, we call the get_encapsulated function to complete the request, and IOLockWakeup(x, y, z) to wake the thread.
But when the upper layer calls USBSetPacketFilter(), my code gets stuck in IOLockSleepDeadline until the timeout expires, so the thread never gets to read the interrupt pipe.
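The underlying pattern, sketched here in Go rather than IOKit C++ since the question is really about synchronization: the thread that waits for the response must not be the one that services the interrupt pipe. A separate handler completes the request and wakes the waiter, which is the coordination IOLockSleepDeadline/IOLockWakeup are meant to provide (all names below are hypothetical):

package main

import (
    "errors"
    "fmt"
    "time"
)

// response stands in for the data returned by the get-encapsulated step.
type response struct{ ok bool }

// sendAndWait issues the encapsulated command (elided) and then blocks until
// the interrupt handler delivers the response or the deadline passes; this
// is the role IOLockSleepDeadline plays in the driver.
func sendAndWait(done <-chan response, deadline time.Duration) (response, error) {
    select {
    case r := <-done:
        return r, nil
    case <-time.After(deadline):
        return response{}, errors.New("timed out waiting for device response")
    }
}

func main() {
    done := make(chan response, 1)

    // Interrupt-pipe handler: runs independently of the waiter, so the
    // response can arrive while sendAndWait is blocked (IOLockWakeup's role).
    go func() {
        time.Sleep(100 * time.Millisecond) // simulated device latency
        done <- response{ok: true}
    }()

    r, err := sendAndWait(done, time.Second)
    fmt.Println(r, err)
}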
It seems that using pipes in threads may cause processes to turn into zombies. In fact, it is the commands run through the pipes that turn into zombies, not the threads themselves. This doesn't happen every time, which is annoying, since it's hard to find the real problem. How do I deal with this issue? What causes it? Is it related to the pipes? How can I avoid it?
The following is the code that creates the sample files.
#buildTest.pl
use strict;
use warnings;

sub generateChrs {
    my ($outfile, $num, $range) = @_;
    open OUTPUT, "|gzip>$outfile";
    my @set = ('A', 'T', 'C', 'G');
    my $cnt = 0;
    while ($cnt < $num) {
        my $pos = int(rand($range));
        my $str = join '' => map $set[rand @set], 1 .. rand(200) + 1;
        print OUTPUT "$cnt\t$pos\t$str\n";
        $cnt++;
    }
    close OUTPUT;
}

sub new_chr {
    my @chrs = 1 .. 22;
    push @chrs, ("X", "Y", "M", "Other");
    return @chrs;
}

for my $chr (&new_chr) {
    generateChrs("$chr.gz", 50000, 100000);
}
The following code occasionally creates zombie processes. The reason or trigger remains unknown.
#paralRM.pl
use strict;
use threads;
use Thread::Semaphore;

my $s = Thread::Semaphore->new(10);

sub rmDup {
    my $reads_chr = $_[0];
    print "remove duplication $reads_chr START TIME: ", `date`;
    return 0 if (!-s $reads_chr);

    my $dup_removed_file = $reads_chr . ".rm.gz";
    $s->down();
    open READCHR, "gunzip -c $reads_chr |sort -n -k2 |" or die "Error: cannot open $reads_chr";
    open OUTPUT, "|sort -k4 -n|gzip>$dup_removed_file";

    my ($last_id, $last_pos, $last_reads) = split('\t', <READCHR>);
    chomp($last_reads);
    my $last_length = length($last_reads);
    my $removalCnts = 0;

    while (<READCHR>) {
        chomp;
        my @line = split('\t', $_);
        my ($id, $pos, $reads) = @line;
        my $cur_length = length($reads);
        if ($last_pos == $pos) {
            # may dup
            if ($cur_length > $last_length) {
                ($last_id, $last_pos, $last_reads) = @line;
                $last_length = $cur_length;
            }
            $removalCnts++;
            next;
        } else {
            # not dup
        }
        print OUTPUT join("\t", $last_id, $last_pos, $last_reads, $last_length, "\n");
        ($last_id, $last_pos, $last_reads) = @line;
        $last_length = $cur_length;
    }
    print OUTPUT join("\t", $last_id, $last_pos, $last_reads, $last_length, "\n");
    close OUTPUT;
    close READCHR;
    $s->up();
    print "remove duplication $reads_chr END TIME: ", `date`;
    #unlink("$reads_chr")
    return $removalCnts;
}

sub parallelRMdup {
    my @chrs = @_;
    my %jobs;
    my @removedCnts;
    my @processing;

    foreach my $chr (@chrs) {
        while (${$s} <= 0) {
            sleep 10;
        }
        $jobs{$chr} = async {
            return &rmDup("$chr.gz");
        };
        push @processing, $chr;
    }

    # wait for all threads to finish
    foreach my $chr (@processing) {
        push @removedCnts, $jobs{$chr}->join();
    }
}

sub new_chr {
    my @chrs = 1 .. 22;
    push @chrs, ("X", "Y", "M", "Other");
    return @chrs;
}

&parallelRMdup(&new_chr);
As the comments on your originating post suggest, there isn't anything obviously wrong with your code here. What might be helpful is understanding what a zombie process actually is.
Specifically, it's a spawned process (created by your open) which has exited, but whose return code the parent hasn't collected yet.
For short-running code, that's not all that significant: when your main program exits, the zombies will 'reparent' to init, which will clean them up automatically.
For longer-running programs, you can use waitpid to clean them up and collect return codes.
Now, in this specific case, I can't see a specific problem, but I would guess it's to do with how you're opening your filehandles. The downside of opening filehandles the way you are is that they're globally scoped, and that's just generally bad news when you're doing thready things.
I would imagine if you changed your open calls to:
my $pid = open( my $exec_fh, "|-", "executable" ) or die "open failed: $!";
And then called waitpid on that $pid following your close then your zombies would finish. Test the return from waitpid to get an idea of which of your execs has errored (if any), which should help you track down why.
Alternatively, set $SIG{CHLD} = "IGNORE";, which effectively tells your child processes to 'just go away immediately'. You won't, however, be able to get a return code from them when they exit.
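The same reap step, sketched in Go for comparison ((*exec.Cmd).Wait plays the role of close plus waitpid here; the gzip child is just an example):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Rough analogue of open(my $fh, "|-", "gzip"): a child we pipe into.
    cmd := exec.Command("gzip", "-c")
    stdin, err := cmd.StdinPipe()
    if err != nil {
        panic(err)
    }
    if err := cmd.Start(); err != nil {
        panic(err)
    }
    fmt.Fprintln(stdin, "some data")
    stdin.Close()

    // Wait reaps the child and collects its exit status: the waitpid step
    // whose absence leaves zombies behind.
    if err := cmd.Wait(); err != nil {
        fmt.Println("child failed:", err)
    }
}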
I'm using threads in Perl (5.12 ActiveState) to allow parallel and asynchronous writing to two different COM ports on Windows. This is what my code looks like:
#!/usr/bin/perl
use warnings;
use strict;
use Win32::SerialPort;
use threads;
my $ComPortObj = new Win32::SerialPort ("COM10") or die ("This is the bitter end...");
[... omit port settings ...]
my $ComPortObj2 = new Win32::SerialPort ("COM6") or die ("This is the bitter end...");
[... omit port settings ...]
my $s_read = "";
my $HangupThr = async
{
    # printf("THREAD - Wait 3 seconds\n");
    # sleep(3);
    print("THREAD - write on COM10: AT\n");
    $ComPortObj->write("AT\r") || die ("Unable to send command\n");
    printf("THREAD - Wait 1 second\n");
    sleep(1);
    $s_read = $ComPortObj2->input;
    # $s_read =~ s/\n/N/g;
    # $s_read =~ s/\r/R/g;
    print("THREAD - read from COM6: $s_read\n");
    return 1;
};
$HangupThr->detach();
# printf("MAIN - Wait 4 seconds\n");
# sleep(4);
print("MAIN - write on COM6: AT\n");
$ComPortObj2->write("AT\r") || die ("Unable to send command\n");
printf("MAIN - Wait 1 second\n");
sleep(1);
$s_read = $ComPortObj->input;
# $s_read =~ s/\n/N/g;
# $s_read =~ s/\r/R/g;
print("MAIN - read from COM10: $s_read\n");
$ComPortObj->close();
$ComPortObj2->close();
What I get is an error when the program exits. Complete output:
MAIN - write on COM6: AT
THREAD - write on COM10: AT
MAIN - Wait 1 second
THREAD - Wait 1 second
MAIN - read from COM10: AT
OK
THREAD - read from COM6: AT
OK
Error in PurgeComm at C:\userdata\Perl scripts\src\handler_error.pl line 0 thread 1
The operation completed successfully.
Error in GetCommTimeouts at C:\userdata\Perl scripts\src\handler_error.pl line 0 thread 1
Error Closing handle 184 for \\.\COM6
The handle is invalid.
Error closing Read Event handle 188 for \\.\COM6
The handle is invalid.
Error closing Write Event handle 192 for \\.\COM6
The handle is invalid.
Error in PurgeComm at C:\userdata\Perl scripts\src\handler_error.pl line 0 thread 1
The handle is invalid.
Error in GetCommTimeouts at C:\userdata\Perl scripts\src\handler_error.pl line 0 thread 1
Error Closing handle 144 for \\.\COM10
The handle is invalid.
Error closing Read Event handle 148 for \\.\COM10
The handle is invalid.
Error closing Write Event handle 180 for \\.\COM10
The handle is invalid.
This is related to the purge of the serial-port handles, and I have no idea how Perl duplicates these in threads. I've tried various close attempts in the thread and in main, without success. Furthermore, I have to use the same ports in both the main program and the thread. Any suggestion to prevent these errors?
Many thanks!
You are dealing with serial ports, and at any point only one process can have control of a serial port (some terminal switches provide multiple logins, but that's not your case). On Windows, when one process connects to a COM port, it automatically disconnects the other process. You can try this by logging in to the same COM port twice from the Windows machine: the other connection should be disconnected, which leads to the invalid handles you are seeing as errors.
Other things you can try:
Create the COM object inside the thread, use it, and destroy the object before accessing the port from another thread.
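That ownership rule, sketched in Go terms (the Port type is hypothetical; the point is confining open, use, and close to a single thread of execution):

package main

import "fmt"

// Port stands in for a serial-port handle; a real implementation would wrap
// the OS handle. The sketch is about ownership, not the serial API.
type Port struct{ name string }

func (p *Port) Write(s string) { fmt.Printf("%s <- %q\n", p.name, s) }
func (p *Port) Close()         { fmt.Println("closed", p.name) }

// owner confines the port to a single goroutine: it opens the port, serves
// write requests from the channel, and closes it, so the handle never
// crosses thread boundaries.
func owner(name string, reqs <-chan string, done chan<- struct{}) {
    p := &Port{name: name} // opened inside the owning goroutine
    for s := range reqs {
        p.Write(s)
    }
    p.Close() // destroyed before any other thread could touch it
    close(done)
}

func main() {
    reqs, done := make(chan string), make(chan struct{})
    go owner("COM10", reqs, done)
    reqs <- "AT\r"
    close(reqs)
    <-done
}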
The function runtime.SetFinalizer(x, f interface{}) sets the finalizer associated with x to f.
What kind of objects are finalized by default?
What are some of the unintended pitfalls caused by having those objects finalized by default?
The following objects are finalized by default:
os.File: The file is automatically closed when the object is garbage collected.
os.Process: Finalization will release any resources associated with the process. On Unix, this is a no-operation. On Windows, it closes the handle associated with the process.
On Windows, it appears that package net can automatically close a network connection.
The Go standard library is not setting a finalizer on object kinds other than the ones mentioned above.
There seems to be only one potential issue that may cause problems in actual programs: when an os.File is finalized, it makes a call to the OS to close the file descriptor. If the os.File was created by calling the function os.NewFile(fd int, name string) *File and the file descriptor is also used by another (different) os.File, then garbage-collecting either one of the file objects renders the other file object unusable. For example:
package main

import (
    "fmt"
    "os"
    "runtime"
)

func open() {
    os.NewFile(1, "stdout")
}

func main() {
    open()

    // Force finalization of unreachable objects
    _ = make([]byte, 1e7)
    runtime.GC()

    _, err := fmt.Println("some text") // Print something via os.Stdout
    if err != nil {
        fmt.Fprintln(os.Stderr, "could not print the text")
    }
}
prints:
could not print the text
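One way to avoid this particular pitfall (a sketch, Unix-only: syscall.Dup gives the new os.File its own descriptor, so finalizing either file cannot invalidate the other):

package main

import (
    "fmt"
    "os"
    "syscall"
)

func main() {
    // Duplicate fd 1 so the new *os.File owns a distinct descriptor.
    fd, err := syscall.Dup(1)
    if err != nil {
        fmt.Fprintln(os.Stderr, "dup failed:", err)
        return
    }
    f := os.NewFile(uintptr(fd), "stdout-dup")

    // Even if f becomes unreachable and its finalizer runs, os.Stdout's
    // fd 1 stays valid, because f would close only the duplicate.
    fmt.Fprintln(f, "written via the duplicated descriptor")
}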
Just jump into os.NewFile's source code:
// NewFile returns a new File with the given file descriptor and name.
func NewFile(fd uintptr, name string) *File {
    fdi := int(fd)
    if fdi < 0 {
        return nil
    }
    f := &File{&file{fd: fdi, name: name}}
    runtime.SetFinalizer(f.file, (*file).close) // <<<<<<<<<<<<<<
    return f
}
When Go runs a GC, it runs the finalizers bound to the objects being collected.
When you open a new file, the Go library binds a finalizer to the returned object for you.
When you are not sure what the GC will do to an object, jump to the source code and check whether the library has set a finalizer on it.
"What kind of objects are finalized by default?"
Nothing in Go is IMO finalized by default.
"What are some of the unintended pitfalls caused by having those objects finalized by default?"
As per above: none.