Consider the following Haskell function:
eraseFile :: FilePath -> IO ()
eraseFile basename =
  do let cmd'  = ">"
         args' = ("/path/to/file/" ++ basename) :: String
     (exitcode', stdout', stderr') <- readProcessWithExitCode cmd' [args'] ""
     return ()
When I try to run this in a stack ghci REPL, or from the main function, I get a permission-denied error in the console. Normally, in a bash console, you could just run this command with sudo, but that doesn't seem to work when invoked from Haskell.
Question: How to execute system commands in Haskell as root?
As already pointed out in the comments, you can just run the entire stack/ghc under root, but I daresay that's a bad idea. Preferably, I'd just invoke sudo as a process from within your program. The particular command – emptying a file, if I have understood that correctly? – is then easiest done with tee:
do let cmd'  = "sudo"
       args' = ["tee", "/path/to/file/" ++ basename :: String]
   (exitcode', stdout', stderr') <- readProcessWithExitCode cmd' args' ""
As Zeta remarks, truncate --size 0 would probably be a cleaner command.
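For completeness, the truncate variant would look roughly like this (a sketch assuming GNU coreutils' truncate is available on the target system):
do let cmd'  = "sudo"
       -- truncate the file to zero bytes in place
       args' = ["truncate", "--size", "0", "/path/to/file/" ++ basename]
   (exitcode', stdout', stderr') <- readProcessWithExitCode cmd' args' ""
   return ()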
To get around password entering, you probably also want to make an exception in the sudoers file. It's a hairy matter; of course the really best thing would be if you could avoid needing root permissions altogether.
Related
I'm new to Haskell and I have been using system to execute shell commands for me, since I'm only interested in the exit status. I noticed that I can do something like:
ls <- system "ls"
pwd <- system "pwd"
and this correctly executes the two commands. I was thinking about executing an array of these commands, e.g.
lsAndPwd <- return $ system <$> ["ls", "pwd"]
and I'm surprised that this doesn't actually execute anything. This compiles and I checked that lsAndPwd has the correct type of [IO GHC.IO.Exception.ExitCode] but the commands are never executed. What's going on and how can I get this to work?
You should use mapM instead of <$>, so you get an IO [ExitCode] rather than a [IO ExitCode]. That way, the commands are collected into a single IO action that you can run:
lsAndPwd <- mapM system ["ls", "pwd"]
Alternatively, call sequence on the whole thing:
lsAndPwd <- sequence $ system <$> ["ls", "pwd"]
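For instance, a minimal complete program along those lines (assuming ls and pwd are on the PATH) would be:
import System.Exit (ExitCode)
import System.Process (system)

main :: IO ()
main = do
  -- runs both commands in order and collects their exit codes
  lsAndPwd <- mapM system ["ls", "pwd"]
  print (lsAndPwd :: [ExitCode])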
I'm fairly new to NixOS, and am trying to invoke emacs from a Haskell program using the following function:
ediff :: String -> String -> String -> IO ()
ediff testName a b = do
  a' <- writeSystemTempFile (testName ++ ".expected") a
  b' <- writeSystemTempFile (testName ++ ".received") b
  let quote s = "\"" ++ s ++ "\""
  callCommand $ "emacs --eval \'(ediff-files " ++ quote a' ++ " " ++ quote b' ++ ")\'"
When I run the program that invokes this command using stack test, I get the following result (interspersed with unit test results):
/bin/sh: emacs: command not found
Exception: callCommand: emacs --eval '(ediff-files "/run/user/1000/ast1780695788709393584.expected" "/run/user/1000/ast4917054031918502651.received")'
When I run the command that failed to run above from my shell, it works flawlessly. How can I run processes from Haskell in NixOS, as though I had invoked them directly, so that they can access the same commands and configurations as my user?
Both your shell and callCommand use the PATH environment variable, so it seems like stack is changing that. It turns out that stack uses a pure nix shell by default, but you also want to access your user environment, which is 'impure'.
To quote the stack documentation:
By default, stack will run the build in a pure Nix build environment (or shell), which means the build should fail if you haven't specified all the dependencies in the packages: section of the stack.yaml file, even if these dependencies are installed elsewhere on your system. This behaviour enforces a complete description of the build environment to facilitate reproducibility. To override this behaviour, add pure: false to your stack.yaml or pass the --no-nix-pure option to the command line.
Another solution is to add Emacs to nix.dependencies in stack.yaml (thanks #chepner). It has the benefit that some version of Emacs will always be available when a developer runs the tests, but that Emacs may not be the Emacs they want to use. You may be able to work around that with something like ~/.config/nixpkgs/config.nix, unless they have configured their Emacs elsewhere, such as in the system configuration or perhaps in home-manager. I'd prefer the simple but impure $PATH solution.
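For reference, the relevant stack.yaml stanza would look roughly like this (a sketch based on the documentation quoted above, not a complete configuration):
nix:
  enable: true
  # run builds and tests in an impure shell so your user PATH (and emacs) is visible
  pure: false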
I'm trying to run the elm-reactor project, which is written in Haskell. It fails because it's trying to proc out to the elm command like this:
createProcess (proc "elm" $ args fileName)
My elm executable is sitting in ~/.cabal/bin, which is in my PATH.
The System.Process.proc command searches the $PATH for its command argument, but it doesn't do tilde (~) expansion, so it doesn't find elm.
System.Process.shell has the opposite problem. It does tilde expansion, but it doesn't search the $PATH, apparently.
From the source of the System.Process module, it looks like almost everything rests on a foreign ccall to "runInteractiveProcess", which I assume is doing whatever $PATH searching is being done. I don't know where the source for runInteractiveProcess would be, and my C is about 15 years' worth of rusty.
I can work around this issue by
a) adding the fully-expanded cabal/bin path to my PATH or
b) symlinking an elm from the working directory to its location in cabal/bin.
However, I'd like to offer a suggested fix to the elm project, to save future adopters the trouble I've gone through. Is there a System.Process call that they should be making here that I haven't tried? Or is there a different method they should be using? I suppose at worst they could getEnv for the PATH and HOME, and implement their own file search using that before calling proc - but that breaks cross-platform compatibility. Any other suggestions?
Try using shell instead of proc, i.e.:
createProcess (shell "elm")
This should invoke elm via a shell, which hopefully will interpret tildes in $PATH as desired.
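If elm-reactor also needs to pass arguments (as in the original proc call), one way is to build the shell string with showCommandForUser, which quotes the arguments for the shell; args and fileName here are just the names from the elm-reactor snippet above, so this is only a sketch:
-- same call as the proc version, but routed through a shell so ~ in PATH may be honoured
createProcess (shell (showCommandForUser "elm" (args fileName)))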
Update: Here is the experiment I performed to test what shell does...
Compile the following program (I called it run-foofoo):
import System.Process

main = do
  (_, _, _, h) <- createProcess $ shell "foofoo"
  ec <- waitForProcess h
  print ec
Create a new directory ~/new-bin and place the following perl script there as the file foofoo:
#!/usr/bin/perl
print "Got here and PATH is $ENV{PATH}\n";
Run: chmod a+rx ~/new-bin/foofoo
Test with:
PATH="/bin:/usr/bin:/sbin" ./run-foofoo # should fail
PATH="$HOME/new-bin:/bin:/usr/bin:/sbin" ./run-foofoo # should succeed
PATH="~/new-bin:/bin:/usr/bin:/sbin" ./run-foofoo # ???
On my OSX system, the third test reports:
Got here and PATH is ~/new-bin:/bin:/usr/bin:/sbin
ExitSuccess
I have a bit of code in my Haskell program like so:
evaluate :: String -> IO ()
evaluate = ...

repl = forever $ do
  putStr "> " >> hFlush stdout
  getLine >>= evaluate
Problem is, when I press the delete key (backspace on Windows), instead of deleting a character from the buffer, I get a ^? character. What's the canonical way of getting Delete to delete a character when reading from stdin? Similarly, I'd like to be able to get the arrow keys to move a cursor around, etc.
Compile the program and then run the compiled executable. This will give the correct behavior for the Delete key. For some reason interpreting the program screws up the use of Delete.
To compile the program, just invoke ghc like this:
$ ghc -O2 myProgram.hs
This will generate a myProgram executable that you can run from the command line:
$ ./myProgram
That will then give the correct behavior for Delete.
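For concreteness, a minimal compilable version of the snippet from the question might look like this (with evaluate stubbed out, since the original elides it):
import Control.Monad (forever)
import System.IO (hFlush, stdout)

-- placeholder for the real evaluator
evaluate :: String -> IO ()
evaluate = putStrLn

main :: IO ()
main = forever $ do
  putStr "> " >> hFlush stdout
  getLine >>= evaluate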
I'm trying to write a logging shell, i.e. one that captures data about the commands being run in a structured format. To do this, I'm using readline to read in commands and then executing them in a subshell whilst capturing things such as the time taken, the environment, the exit status and so on.
So far so good. However, initial attempts to run things such as vi or less from within this logging shell failed. Investigation suggested that the thing to do was to establish a pseudo-tty and connect the subshell to that rather than to a normal pipe. This stops vi complaining about not being connected to a terminal, but still fails - I get some nonsense printed to the screen and commands print as characters in the editor - e.g. 'ESC' just displays ^[.
I thought that what I needed to do was put the pty in raw mode. To do this, I tried the following:
pty <- do
  parentTerminal <- getControllingTerminalName >>=
    \a -> openFd a ReadWrite Nothing defaultFileFlags
  sttyp <- getTerminalAttributes parentTerminal
  (a, b) <- openPseudoTerminal
  let rawModes = [ProcessInput, KeyboardInterrupts, ExtendedFunctions,
                  EnableEcho, InterruptOnBreak, MapCRtoLF, IgnoreBreak,
                  IgnoreCR, MapLFtoCR, CheckParity, StripHighBit,
                  StartStopOutput, MarkParityErrors, ProcessOutput]
      sttym = withoutModes rawModes sttyp
      withoutModes modes tty = foldl withoutMode tty modes
  setTerminalAttributes b sttym Immediately
  setTerminalAttributes a sttym Immediately
  a' <- fdToHandle a
  b' <- fdToHandle b
  return (a', b')
That is, we get the parent terminal's attributes, remove the various flags that I think correspond to setting the tty into raw mode (based on this code and the haddock for System.Posix.Terminal), and then set these on both sides of the pty.
I then start up a process within a shell using createProcess and wait for it with waitForProcess, giving the slave side of the pty as the stdin and stdout handles of the child process:
eval :: (Handle, Handle) -> String -> IO ()
eval pty command = do
    let (ptym, ptys) = pty
    (_, _, hErr, ph) <- createProcess $ (shell command) {
          delegate_ctlc = True
        , std_err = CreatePipe
        , std_out = UseHandle ptys
        , std_in = UseHandle ptys
        }
    snipOut <- tee ptym stdout
    snipErr <- sequence $ fmap (\h -> tee h stderr) hErr
    exitCode <- waitForProcess ph
    return ()
  where
    tee :: Handle -> Handle -> IO B.ByteString
    tee from to = DCB.sourceHandle from
               $= DCB.conduitHandle to -- Sink contents to out Handle
               $$ DCB.take 256         -- Pull off the start of the stream
This definitely changes terminal settings (confirmed with stty), but doesn't fix the problem. Am I missing something? Is there some other device I need to set attributes on?
Edit: The full runnable code is available at https://github.com/nc6/tabula - I've simplified a few things for this post.
This is how you create the vi process:
(_, _, hErr, ph) <- createProcess $ (shell command) {
Those return values are the child's stdin/stdout/stderr handles. You throw away stdin and stdout (and keep stderr), but you will need those to communicate with vi. Basically, when you type ESC, it isn't even getting to the process.
As a larger architectural note: you are rewriting not just the terminal code, but a full REPL/shell. This is a larger project than you probably want to get into (go read the bash manual to see all the stuff they needed to implement). You might want to consider wrapping around a user-choosable shell (like bash). Unix is pretty modular this way; that is why xterm, ssh, the command prompt etc. all work the same way: they proxy the chosen shell, rather than each writing their own.
#jamshidh pointed out that I wasn't actually connecting my stdin to the master side of the pty, so the issues I was getting were nothing to do with vi or terminal modes, and entirely to do with not passing in any input!
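For anyone following along, the missing piece looks roughly like this: forward the logging shell's own stdin to the pty master so that keystrokes (ESC included) actually reach the child. This forkIO loop is only an illustrative sketch, not the code that ended up in the project:
import Control.Concurrent (forkIO)
import Control.Monad (forever, void)
import System.IO (BufferMode (NoBuffering), Handle, hFlush, hGetChar,
                  hPutChar, hSetBuffering, stdin)

-- Copy keystrokes from our real stdin down the pty master (ptym) to the child.
forwardInput :: Handle -> IO ()
forwardInput ptym = do
  hSetBuffering stdin NoBuffering   -- hand characters over as they are typed
  void . forkIO . forever $ do
    c <- hGetChar stdin
    hPutChar ptym c
    hFlush ptym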