I want to implement a simple editor with Haskell.
My basic idea was to open an xterm instance and send it the content the editor should display (text plus, e.g., coloring, cursor position, etc.). The content could then simply be rewritten on every keystroke.
I managed to open xterm in a subprocess and let it display the contents of a file (see the code below); however, writing to its stdin does not seem to work (I don't get any error, but the text doesn't show up in the xterm window either). I then tried running simple shell commands like ls or cat; with those, the interaction via the streams does work.
Question: How can my Haskell process interact with the created xterm instance?
import System.IO
import System.Process

main = do
  (Just hin, Just hout, Just herr, jHandle) <-
    createProcess (proc "xterm" ["-e", "tail", "-f", "foo.txt"])
      { cwd     = Just "."
      , std_in  = CreatePipe
      , std_out = CreatePipe
      , std_err = CreatePipe
      }
  hPutStrLn hin "This should be printed to xterm"
  waitForProcess jHandle
Unlike with cat and grep, reading from and writing to an xterm is not done through the usual standard streams. You need to open a pseudo-terminal (using functions from System.Posix.Terminal), hook the slave part to the xterm by means of the -S argument, and then read/write from the master part.
About the -S argument, the xterm manpage says:
This option allows xterm to be used as an input and output channel for an existing program and is sometimes used in specialized applications. The option value specifies the last few letters of the name of a pseudo-terminal to use in slave mode, plus the number of the inherited file descriptor. If the option contains a “/” character, that delimits the characters used for the pseudo-terminal name from the file descriptor.
More information here and here.
Some example code:
import Control.Monad
import Control.Applicative
import System.IO
import System.FilePath
import System.Posix.Terminal
import System.Posix.Types
import System.Posix.IO
import System.Process

main :: IO ()
main = do
  (master@(Fd mfd), slave@(Fd sfd)) <- openPseudoTerminal
  -- deactivating echo seems to be necessary
  slaveattr <- flip withoutMode EnableEcho <$> getTerminalAttributes slave
  setTerminalAttributes slave slaveattr Immediately
  sname <- getSlaveTerminalName master
  let sbasename = takeFileName sname
      sargs = "-S" ++ sbasename ++ "/" ++ show sfd
  (Just _, Just _, Just _, _) <- createProcess $
    (proc "xterm" [sargs]) { cwd     = Just "."
                           , std_in  = CreatePipe
                           , std_out = CreatePipe
                           , std_err = CreatePipe
                           }
  h <- fdToHandle master
  hSetBuffering h NoBuffering
  -- read and print the initial line sent by xterm
  hGetLine h >>= putStrLn
  hPutStrLn h "this should appear in the xterm"
  -- whatever is typed in the xterm should appear both in the console and the xterm
  forever $ hGetChar h >>= \c -> (putChar c >> hPutChar h c)
Related
I am using System.IO.hIsTerminalDevice to determine whether a Handle is a terminal and apply colorization if it is. I noticed that when forking a process using CreatePipe as the stream for the new process's stdin and stdout, this function returns True, which seems to be the wrong answer: a pipe should not be considered a terminal. I have tried to track down the issue by looking at the System.IO and System.Posix.IO source code, but it ends up in the C pipe function and leads me nowhere.
Is there a better way to tell if a handle is a terminal? Or am I doing something wrong?
Update
Here are 2 programs that are supposed to expose the behaviour I observed:
import Control.Monad
import System.IO
import System.Process

main = do
  (in_, Just out, Just err, h) <- createProcess $ (proc "./test2" [])
    { std_in  = CreatePipe
    , std_err = CreatePipe
    , std_out = CreatePipe }
  dump out
  where
    dump h = forever $ do
      ln <- hGetLine h
      putStrLn ln
Then `test2`:
import System.IO

main = do
  print =<< hIsTerminalDevice stderr
  print =<< hIsTerminalDevice stdout
  print =<< hIsTerminalDevice stdin
Then running the first program:
$ ./test
False
False
False
I know what's happening: what I am forking is not the program itself but a docker container, and I explicitly add the -t parameter, which allocates a tty for the container...
Do you have a minimal example to illustrate the problem? The following program prints "False" twice, suggesting that handles created with CreatePipe are not misidentified as terminal devices:
import System.Process
import System.IO
main = do
  (Just in_, Just out, err, h) <-
    createProcess $ (shell "cat") { std_in  = CreatePipe,
                                    std_out = CreatePipe }
  print =<< hIsTerminalDevice in_
  print =<< hIsTerminalDevice out
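If you want a lower-level check, the unix package's System.Posix.Terminal.queryTerminal wraps isatty directly and works on raw file descriptors (POSIX only). A minimal sketch:

import System.Posix.IO (stdInput, stdOutput)
import System.Posix.Terminal (queryTerminal)

main :: IO ()
main = do
  -- queryTerminal calls isatty(3) on the given file descriptor
  print =<< queryTerminal stdInput
  print =<< queryTerminal stdOutput

It should agree with hIsTerminalDevice; it just takes the Handle layer out of the equation.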
I'm trying to pipe the stdin of my program to an external process using the following:
import System.IO
import System.Posix.IO
import System.Posix.Process
import qualified System.Process as P
import Control.Concurrent (forkIO, killThread)
import Control.Monad
main :: IO ()
main = do
  h <- fdToHandle stdInput
  (Just hIn, _, _, p) <-
    P.createProcess (P.shell "runhaskell echo.hs"){ P.std_in = P.CreatePipe }
  hSetBuffering hIn NoBuffering
  tid <- forkIO $ getInput hIn
  e <- P.waitForProcess p
  killThread tid
  print e

getInput hin = do
  forever $ do
    l <- getLine
    hPutStrLn hin l
Here echo.hs just echoes stdin to stdout. If I wait a couple of seconds between giving new input, I get the following error:
pipes.hs: <stdin>: hGetLine: invalid argument (Bad file descriptor)
When I tried compiling with ghc pipes.hs, the compiled program would not redirect stdin to the stdin of echo.hs at all.
Your fdToHandle stdInput call creates a new Handle pointing at file descriptor 0 (stdin) of the original process. After a bit of time, the garbage collector notices that it's no longer being used and garbage-collects the Handle, which in turn causes the underlying file descriptor to be closed. Then your getLine call (which uses System.IO.stdin) fails: that Handle is still open, but the underlying file descriptor it points at has been closed.
FWIW, I'd recommend using binary I/O on the handles to avoid issues with character encodings.
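In other words, drop the fdToHandle stdInput call entirely and read from the stdin Handle the runtime already gives you. A minimal sketch of the corrected program (assuming the same echo.hs child, and using ByteString I/O as suggested above):

import qualified Data.ByteString.Char8 as BS
import qualified System.Process as P
import Control.Concurrent (forkIO, killThread)
import Control.Monad (forever)
import System.IO

main :: IO ()
main = do
  (Just hIn, _, _, p) <-
    P.createProcess (P.shell "runhaskell echo.hs"){ P.std_in = P.CreatePipe }
  hSetBuffering hIn NoBuffering
  -- read from the process's own stdin; no second Handle over fd 0 is created,
  -- so nothing gets garbage-collected and closed behind our back
  tid <- forkIO $ forever $ BS.getLine >>= BS.hPutStrLn hIn
  e <- P.waitForProcess p
  killThread tid
  print e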
I want to read a file, process it, and write the results to another file; the input file name is to be supplied through a console argument, and the output file name is generated from the input file name.
The catch is I want it to transparently “fail over” to stdin/stdout if no arguments are supplied; essentially, in case a file name is supplied, I redirect stdin/stdout to the respective file names so I can transparently use interact whether the file name was supplied or not.
Here's the code I hacked together, with dummy output in a superfluous else. What would be the proper, idiomatic way of doing this?
It probably could have something to do with Control.Monad's when or guard, as was pointed out in a similar question, but maybe somebody wrote this already.
import System.IO
import Data.Char (toUpper)
import System.Environment
import GHC.IO.Handle

main :: IO ()
main = do
  args <- getArgs
  if not $ null args
    then do
      print $ "working with " ++ head args
      finHandle <- openFile (head args) ReadMode             -- open the supplied input file
      hDuplicateTo finHandle stdin                           -- bind stdin to the input file's handle
      foutHandle <- openFile (head args ++ ".out") WriteMode -- open the output file for writing
      hDuplicateTo foutHandle stdout                         -- bind stdout to the outgoing file
    else print "working through stdin/redirect"              -- just so we know which path was taken
  interact ((++) "Here you go---\n" . map toUpper)
There's nothing very special about interact - here is its definition:
interact :: (String -> String) -> IO ()
interact f = do
  s <- getContents
  putStr (f s)
How about something like this:
import System.Environment
import Data.Char
main = do
  args <- getArgs
  let (reader, writer) =
        case args of
          []         -> (getContents, putStr)
          (path : _) -> let outpath = path ++ ".output"
                        in  (readFile path, writeFile outpath)
  contents <- reader
  writer (process contents)

process :: String -> String
process = (++) "Here you go---\n" . map toUpper
Based on the command-line arguments, we set reader and writer to the IO actions that will read the input and write the output.
This seems fairly idiomatic to me already. The one note I have is to avoid head, as it is an unsafe function (it can throw a runtime error). In this case it is fairly easy to do so by using case to pattern match.
main :: IO ()
main = do
  args <- getArgs
  case args of
    fname:_ -> do
      print $ "working with " ++ fname
      finHandle <- openFile fname ReadMode
      hDuplicateTo finHandle stdin
      foutHandle <- openFile (fname ++ ".out") WriteMode
      hDuplicateTo foutHandle stdout
    [] -> do
      print "working through stdin/redirect"
  interact ((++) "Here you go---\n" . map toUpper)
I want to create a process and write some text from my haskell program into the process's stdin periodically (from an IO action).
The following works correctly in GHCi but not when built and run: in GHCi everything works perfectly and the value from the IO action is fed in periodically, but the built executable seems to pause for arbitrarily long periods of time when writing to the process's stdin.
I've used createProcess (from System.Process) to create the handle and tried hPutStrLn (buffering set to NoBuffering; LineBuffering didn't work either).
So I'm trying the process-streaming package and pipes but can't seem to get anything to work at all.
The real question is this: how do I create a process from Haskell and write to it periodically?
Minimal example that exhibits this behavior:
import System.Process
import Data.IORef
import qualified Data.Text as T -- from the text package
import qualified Data.Text.IO as TIO
import Control.Concurrent.Timer -- from the timers package
import Control.Concurrent.Suspend -- from the suspend package
main = do
  (Just hin, _, _, _) <- createProcess_ "bgProcess" $
    (System.Process.proc "grep" ["10"]) { std_in = CreatePipe }
  ref <- newIORef 0 :: IO (IORef Int)
  flip repeatedTimer (msDelay 1000) $ do
    x <- atomicModifyIORef' ref $ \x -> (x + 1, x)
    hSetBuffering hin NoBuffering
    TIO.hPutStrLn hin $ T.pack $ show x
Any help will be greatly appreciated.
This is a pipes Producer that emits a sequence of numbers with a second delay:
{-# language NumDecimals #-}
import Control.Concurrent
import Pipes
import qualified Data.ByteString.Char8 as Bytes
periodic :: Producer Bytes.ByteString IO ()
periodic = go 0
  where
    go n = do
      d <- liftIO (pure (Bytes.pack (show n ++ "\n")))  -- put your IO action here
      Pipes.yield d
      liftIO (threadDelay 1e6)
      go (succ n)
And, using process-streaming, we can feed the producer to an external process like this:
import System.Process.Streaming
main :: IO ()
main = do
  executeInteractive (shell "grep 10"){ std_in = CreatePipe } (feedProducer periodic)
I used executeInteractive, which sets std_in automatically to NoBuffering.
Also, if you pipe std_out and want to process each match immediately, be sure to pass the --line-buffered option to grep (or use the stdbuf command) to ensure that matches are immediately available at the output.
What about using threadDelay, e.g.:
import Control.Monad (forever)
import Control.Concurrent (threadDelay)
...
forever $ do
  x <- atomicModifyIORef' ref $ \x -> (x + 1, x)
  hSetBuffering hin NoBuffering
  TIO.hPutStrLn hin $ T.pack $ show x
  threadDelay 1000000  -- 1 sec
Spawn this off in another thread if you need to do other work at the same time.
You can remove the need for the IORef with:
loop h x = do
  hSetBuffering h NoBuffering
  TIO.hPutStrLn h $ T.pack $ show x
  threadDelay 1000000
  loop h (x + 1)
And, of course, you only need to do the hSetBuffering once - e.g. do it just before you enter the loop.
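Putting those pieces together, here is a sketch of the whole program using plain System.Process (the helper name feedLoop is just for illustration; grep is the same example command as above):

import Control.Concurrent (forkIO, threadDelay)
import System.IO
import System.Process
import qualified Data.Text as T
import qualified Data.Text.IO as TIO

main :: IO ()
main = do
  (Just hin, _, _, ph) <-
    createProcess (proc "grep" ["10"]) { std_in = CreatePipe }
  hSetBuffering hin NoBuffering     -- set buffering once, before entering the loop
  _ <- forkIO (feedLoop hin 0)      -- feed the process from a separate thread
  _ <- waitForProcess ph
  return ()

-- Write an increasing counter to the handle once a second.
feedLoop :: Handle -> Int -> IO ()
feedLoop h x = do
  TIO.hPutStrLn h (T.pack (show x))
  threadDelay 1000000               -- 1 sec
  feedLoop h (x + 1)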
I want to invoke a process from within a haskell program and capture stdout as well as stderr.
What I do:
(_, stdout, stderr) <- readProcessWithExitCode "command" [] ""
The problem: this way, stdout and stderr are captured separately, but I want the messages to appear in the right order relative to each other (otherwise I could simply use stdout ++ stderr, which separates error messages from their stdout counterparts).
I do know that I could achieve this if I'd pipe the output into a file, i.e.
tmp <- openFile "temp.file" ...
createProcess (proc "command" []) { std_out = UseHandle tmp
                                  , std_err = UseHandle tmp }
So my current workaround is to pipe the output to a temp file and read it back in. However, I'm looking for a more direct approach.
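For reference, here is that workaround spelled out as a complete function (a sketch only; the name runMerged and the temp-file handling are mine, not part of the code above):

import System.Directory (removeFile)
import System.Exit (ExitCode)
import System.IO
import System.Process

-- Run a command, merging stdout and stderr through a temporary file.
runMerged :: String -> [String] -> IO (ExitCode, String)
runMerged cmd args = do
  (path, tmp) <- openTempFile "." "merged.tmp"
  (_, _, _, ph) <- createProcess (proc cmd args)
    { std_out = UseHandle tmp
    , std_err = UseHandle tmp
    }
  hClose tmp                         -- the child writes to the fd; our Handle is no longer needed
  code <- waitForProcess ph
  out  <- readFile path
  length out `seq` removeFile path   -- force the lazy read before deleting the file
  return (code, out)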
If I were on Unix, I'd simply invoke a shell command à la
command 2>&1
and that's it. However, I'd like to have this as portable as possible.
What I need this for: I've built a tiny Haskell CGI script (just to play with it) which invokes a certain program and prints the output. I want to HTML-escape the output, so I can't simply pipe it to stdout.
I was thinking: maybe it's possible to create an in-memory handle, like a PipedInputStream/PipedOutputStream in Java, or ArrayInputStream/ArrayOutputStream, which would allow processing IO streams within memory. I looked around on Hoogle for a function of type :: Handle, but did not find anything.
Maybe there is another Haskell module out there which allows me to merge two streams?
You can use pipes to concurrently merge two input streams. The first trick is to read from two streams concurrently, which you can do using the stm package:
import Control.Applicative
import Control.Proxy
import Control.Concurrent
import Control.Concurrent.STM
import System.Process
toTMVarC :: (Proxy p) => TMVar a -> () -> Consumer p a IO r
toTMVarC tmvar () = runIdentityP $ forever $ do
  a <- request ()
  lift $ atomically $ putTMVar tmvar a

fromTMVarS :: (Proxy p) => TMVar a -> () -> Producer p a IO r
fromTMVarS tmvar () = runIdentityP $ forever $ do
  a <- lift $ atomically $ takeTMVar tmvar
  respond a
I will soon provide the above primitives in a pipes-stm package, but use the above for now.
Then you just feed each Handle to a separate TMVar and read from both concurrently:
main = do
  (_, mStdout, mStderr, _) <- createProcess (proc "ls" [])
    { std_out = CreatePipe
    , std_err = CreatePipe
    }
  case (,) <$> mStdout <*> mStderr of
    Nothing -> return ()
    Just (stdout, stderr) -> do
      out <- newEmptyTMVarIO
      err <- newEmptyTMVarIO
      forkIO $ runProxy $ hGetLineS stdout >-> toTMVarC out
      forkIO $ runProxy $ hGetLineS stderr >-> toTMVarC err
      let combine () = runIdentityP $ forever $ do
            str <- lift $ atomically $
              takeTMVar out `orElse` takeTMVar err
            respond str
      runProxy $ combine >-> putStrLnD
Just swap out putStrLnD for however you want to process the merged stream.
To learn more about the pipes package, just read Control.Proxy.Tutorial.
For POSIX systems, you can use createPipe and fdToHandle from System.Posix.IO to create a pair of new handles (I'm not sure where best to close those handles and fds, though):
import System.Exit (ExitCode)
import System.IO
import System.Process
import qualified System.Posix.IO as Posix  -- qualified, since newer process versions also export a createPipe

readProcessWithMergedOutput :: String -> IO (ExitCode, String)
readProcessWithMergedOutput command = do
  (p_r, p_w) <- Posix.createPipe
  h_r <- Posix.fdToHandle p_r
  h_w <- Posix.fdToHandle p_w
  (_, _, _, h_proc) <- createProcess (proc command [])
    { std_out = UseHandle h_w
    , std_err = UseHandle h_w
    }
  hClose h_w                       -- drop our copy of the write end so EOF can be seen
  content  <- hGetContents h_r
  length content `seq` return ()   -- force the read before waiting, so a full pipe can't deadlock
  ret_code <- waitForProcess h_proc
  return (ret_code, content)
For Windows, this post implemented a cross-platform createPipe.
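If I recall correctly, newer versions of the process package (1.2.1 and later) also export a portable createPipe :: IO (Handle, Handle) from System.Process itself, which avoids the POSIX-only dependency. A sketch of the same idea using it:

import System.Exit (ExitCode)
import System.IO
import System.Process

-- Same approach, but with the pipe provided by System.Process.
readProcessMerged :: String -> IO (ExitCode, String)
readProcessMerged command = do
  (h_r, h_w) <- createPipe
  (_, _, _, ph) <- createProcess (proc command [])
    { std_out = UseHandle h_w
    , std_err = UseHandle h_w
    }
  hClose h_w                       -- the child now holds the only write end
  content <- hGetContents h_r
  length content `seq` return ()   -- read everything before waiting
  code <- waitForProcess ph
  return (code, content)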