How do I read and write repeatedly from a process in vim?

It was hard to phrase this as a question, but here is what I want to do:
I want vim to execute a process and to write to its stdin and read from its stdout file descriptors repeatedly. In other words, I want a back-and-forth dialogue between vim and another program.
I'll use cat as a simple example. If you run cat with no command-line arguments, then whatever you type on stdin is output to stdout after each newline character.
What I would like is to have a vim window which displays the most recent output of some program and to be able to write to its stdin upon certain events. So, unlike the following:
:read !cat
which waits for you to finish typing and press Ctrl-D to close cat's stdin, I want to display the output immediately after I press enter and to keep the process running so that I can type more.
Ultimately, I don't intend to be typing the input to the process; I want events (e.g. moving the cursor) to trigger vim to write specific commands to this process and display the output.
The reason I want the program to keep running, instead of invoking the process once per event, is that the input to the program will be commands that generate state. If the program had to be invoked for each command, it would have to save that state to a file and read it back in every time.
An alternative I am considering: writing the program to listen on a port. Vim would then invoke a command that simply opens the socket, passes the vim command to the program, and returns the program's reply. That would require me to write two programs, though, which I hope is unnecessary.
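For concreteness, the client half of that alternative could be as small as this Python sketch (the port number and the one-line, newline-terminated protocol here are made up):

import socket
import sys

def send_command(command, host="localhost", port=7777):
    # Hypothetical one-shot client: send one command, return one reply.
    with socket.create_connection((host, port)) as sock:
        sock.sendall((command + "\n").encode())
        return sock.makefile().readline().rstrip("\n")

if __name__ == "__main__":
    print(send_command(" ".join(sys.argv[1:])))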
What I am trying to do here is write a tool that analyses your code and provides an interactive command-line interface (e.g. commands that do things like "output a list of all the lines which set this variable"). However, rather than running this program in a separate terminal or screen session, I would like vim to be able to integrate the output of this program in a window, if that is possible.

You should check out vimproc. You can use vimproc#popen3 to start the process. vimproc#popen3 returns an object (a dictionary) with a stdin member field that has a write method and a stdout member field that has a read method.
The problem is how to trigger the reading and writing. Vim is single-threaded, so you'll have to rely on autocmd events. Obviously you'll want to try reading whenever you write something (just in case), but you should also use the CursorHold event.
You can also use Python for the IO. While it seems like you could use Python threading to trigger the reading, I would advise against it: Vim was never built for multithreading, and in my experience trying to hack it in with Python threads often causes race conditions and crashes Vim.
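As a rough illustration of the Python route, here is a minimal sketch that could be loaded into a Python-enabled Vim with :pyfile; the analyser binary ("mytool") and the commands it understands are hypothetical:

import subprocess

import vim  # the module Vim exposes to its embedded Python

# Start the hypothetical analyser once and keep it alive, with
# line-buffered text pipes on both ends.
proc = subprocess.Popen(["mytool"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True, bufsize=1)

def ask(command):
    # Send one command, block until one reply line arrives, and
    # append it to the current buffer.
    proc.stdin.write(command + "\n")
    proc.stdin.flush()
    vim.current.buffer.append(proc.stdout.readline().rstrip("\n"))

You would then call ask() from the autocmd events mentioned above, e.g. autocmd CursorHold * python3 ask("status"). Note that the read blocks until the tool answers, so the tool must reply to every command it is sent.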

Related

Is there a Linux command that writes its command line arguments to a file, without using a shell?

I have a batch processing system that can execute a number of commands sequentially. These commands are specified as a list of words, which are executed by Python's subprocess.call() function, without using a shell. For various reasons I do not want to change the processing system.
I would like to write something to a file, so a subsequent command can use it. Unfortunately, all the ways I can think of to write something to the disk involve some sort of redirection, which is a shell concept.
So is there a way to write a Linux command line that will take its argument and write it to a file, in a context where it is executed outside a shell?
Well, one could write a generalised parser and process manager that could handle this for you, but, luckily, one already comes with Linux. All you have to do is tell it what command to run, and it will handle the redirection for you.
So, if you were to modify your commands a bit, you could easily do this. Just concatenate the words together with spaces, quoting any word that may contain spaces or other special characters, and then you can use a list such as:
["/bin/sh", "-c", "{your new string here} > /some/file"]
Et voila, stuff written to disk. :)
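In Python terms, that rewrite might look like this sketch (the argument list and output path are made up):

import shlex
import subprocess

words = ["echo", "hello world"]   # the original list of words
# Quote each word so spaces and shell metacharacters survive, then
# let /bin/sh perform the redirection.
command = " ".join(shlex.quote(w) for w in words) + " > /tmp/out.txt"
subprocess.call(["/bin/sh", "-c", command])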
Looking at the docs for subprocess.call, I see it has extra parameters:
subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False)
If you pass stdout= a file object you have opened, then the output of your command will go to that file, which gives you essentially the same behaviour.
I don't see your exact usage case, but this is certainly a way to synthesise the command-line pipe behaviours, with little coding change.
Note that the docs also warn against using the built-in stdout=PIPE support with this function, depending on your exact requirements: it is important that you read data from a pipe regularly, or the writer will stall once the buffer is full.
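A minimal sketch of the stdout= route, with a made-up command and path:

import subprocess

# Open the target file ourselves and hand it to the child as its
# stdout; no shell involved, and no pipe left undrained.
with open("/tmp/out.txt", "w") as out:
    subprocess.call(["echo", "hello world"], stdout=out)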

Restricting pipes and redirects (Python3)

I have a program that takes standard input from the user and runs through the command line. Is there some way to make a program ignore pipes and redirects?
For example: python program.py < input.txt > output.txt would just act as if you put in python program.py
There is no simple way to find the terminal the user launched you with in the general case. There are some techniques you can use, but they will not always work.
You can use os.isatty() to detect whether a file (such as sys.stdin or sys.stdout) appears to be an interactive terminal session. It is possible you are hooked up to a terminal session other than the one the user used to launch your program, so this is not foolproof. Such a terminal session might even be under the control of a program rather than a human.
Under Unix, processes have a notion of a "controlling terminal." You may be able to talk to that via os.ctermid(). But the user can manipulate this value before launching your process. You also may not have a controlling terminal at all, e.g. if running as a daemon.
You can inspect the parent process and see if any of its file descriptors are hooked up to terminal sessions. Unfortunately, I'm not aware of any cross-platform way to do that. On Linux, I'd start with os.getppid() and the /proc filesystem (see proc(5)). If the parent process has exited (e.g. the user ran your_program.py & disown; exit under bash), this will not work. But in that case, there isn't much you can do anyway.
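Combining the first two checks, a best-effort sketch (heuristic only, for the reasons above):

import os
import sys

def seems_interactive():
    # Both stdin and stdout look like interactive terminal sessions.
    return os.isatty(sys.stdin.fileno()) and os.isatty(sys.stdout.fileno())

if seems_interactive():
    print("probably talking to a terminal")
else:
    # Redirected or piped; os.ctermid() names the controlling
    # terminal, if the process has one at all.
    print("redirected; controlling tty:", os.ctermid())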

Take user input from the background

What I'm trying to accomplish is to have a process running in the background from a Linux terminal which takes user input and acts on that input even when the terminal window is not focused, so I can work with other GUI applications, and then, when I push some pre-defined buttons, something can alter the program's state without my current window losing focus. Just as simple as that (not that simple for me, though).
I'm not asking for a specific kind of implementation. I'm fine with anything that may work: C, C++, Java, a Linux Bash script... The only requisite is that it works under Linux.
Thank you very much
Well, you can have your server read a FIFO or a Unix domain socket (or even a message queue), then write a client that takes command-line input and writes it to the pipe/queue from some other terminal session. With FIFOs you can just echo input from the command line itself to the pipe, but FIFOs come with their own headaches. The "push the button and magic happens" part is a lot trickier, but maybe that was badly phrased?
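A bare-bones version of the FIFO idea in Python (the pipe's path and the command handling are made up):

import os

PATH = "/tmp/mydaemon.ctl"   # hypothetical control pipe

if not os.path.exists(PATH):
    os.mkfifo(PATH)

while True:
    # Opening a FIFO for reading blocks until a writer opens it;
    # when the writer closes, we get EOF and loop to reopen.
    with open(PATH) as fifo:
        for line in fifo:
            command = line.strip()
            print("got command:", command)   # act on it here
            if command == "quit":
                raise SystemExit

Any other terminal session can then drive it with something as simple as echo quit > /tmp/mydaemon.ctl.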

Controlling multiple background processes from a shell on an embedded Linux

Currently I am working with an embedded system that runs Linux. I need to run multiple applications at the same time, and I would like them to run through one script. A colleague has already implemented this using a wrapper script and return codes.
wrapperScript.sh $command > output_log.txt &
wrapperScript.sh $command2 > output_log2.txt &
But the problem arises when exiting the applications. Normally, the applications on the embedded system require the user to press q to exit. But when the wrapper script gets a kill signal or user signal, rather than doing that, it just kills the process. This is dangerous because the wrapper script assumes the application has the proper facilities to deal with the kill signal (that is not always the case, and it leads to memory leaks and unwanted socket connections). I have looked into automation programs such as Expect, but since I am using an embedded board, I am unable to get Expect for it. Is there a way, in the bash shell or in embedded C, to deal with multiple processes by having one single program automatically send the q keystroke to them?
I would also like to be able to keep logs of each program's output.
EDIT:
Solution:
Okay, I found the solution to the problem: Expect is the way to go about it in this situation. There is a serious limitation in that it may be slower, but the trade-off is not bad here. I decided to use the Expect scripting language to implement the solution. There are certain trade-offs.
Pros:
* Precise control over the embedded application
* Can make the process interactive to the user
* Can deal with multiple processes
Cons:
* Performance is slow
Use a pipe
Make the command read input from a named pipe. You'll then be able to send it commands from anywhere.
mkfifo command1.ctrl
{ "$command1" <command1.ctrl >command1.log 2>&1;
rm command1.ctrl; } &
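The sending side can be any language that can open a file; in Python, for instance (a sketch, assuming something has the FIFO open for reading):

# Opening a FIFO for writing blocks until a reader is attached.
with open("command1.ctrl", "w") as ctrl:
    ctrl.write("q\n")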
Use screen
Run your applications inside the Screen program. You can run all your commands in separate windows in a single instance of screen (you'll save a little memory that way). You can specify the commands to run from a Screen configuration file:
sessionname mycommands
screen -t command1 command1
screen -t command2 command2
To terminate a program, use
screen -S mycommands -p 1 -X stuff 'q
'
where 1 is the number of the window to send the input to (each screen clause in the configuration file starts a window). The text after stuff is input to send to the program; note the presence of a newline after the q (some applications may require a carriage return instead; you can get one with stuff "q$(printf \\015)" if your shell isn't too feature-starved). If your command expects a q with no newline at all, just stuff q.
For logging, you can use Screen's logging feature, or redirect the output to a file as before.
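If the controlling side is itself a program rather than a person, the same stuff command can be driven through subprocess; a sketch reusing the session and window names above:

import subprocess

# Ask the running screen session to type "q" plus a newline
# into window 1, exactly as the interactive command above does.
subprocess.call(["screen", "-S", "mycommands",
                 "-p", "1", "-X", "stuff", "q\n"])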

Invoking the less application from GNU readline

Bit of a support question; apologies for that.
I have an application linked with GNU readline. The application can invoke shell commands (similar to invoking tclsh using readline wrapper). When I try to invoke the Linux less command, I get the following error:
Suspend (tty output)
I'm not an expert around issues of terminals. I've tried to google it but found no answer. Does any one know how to solve this issue?
Thanks.
You probably need to investigate the functions rl_prep_terminal() and rl_deprep_terminal() documented in the readline manual:
Function: void rl_prep_terminal(int meta_flag)
Modify the terminal settings for Readline's use, so readline() can read a single character at a time from the keyboard. The meta_flag argument should be non-zero if Readline should read eight-bit input.
Function: void rl_deprep_terminal(void)
Undo the effects of rl_prep_terminal(), leaving the terminal in the state in which it was before the most recent call to rl_prep_terminal().
The less program is likely to get confused if the terminal is already in the special mode used by the Readline library and it tries to tweak the terminal into an equivalent mode. This is a common problem for programs that work with the curses library, or other similar libraries that adjust the terminal status and run other programs that also do that.
Whilst counterintuitive, it may be stopped waiting for input (some OSs and shells report Stopped/Suspended (tty output) when you might expect the message to refer to tty input). That would fit the usual behaviour of less when it stops at the end of (what it thinks is) the screen length.
Can you use cat or head instead? Or feed less some input? Or look at the less man/info pages to see which of less's options might suit your requirement (e.g. w, z, F)?
Your readline application is making itself the controlling application for your tty.
When you invoke less from inside the application, less wants to be in control of the tty as well.
If you are trying to invoke less from your application to display a file for the user, you want to put the newly fork'd process into its own process group before calling exec. You can do this with setsid(). Then, when less calls tcsetpgrp(), it will not get thrown into the background with SIGTTOU.
When less finishes, you'll want to restore the foreground process group with tcsetpgrp() as well.
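A sketch of that dance in Python; note it uses setpgid()/tcsetpgrp() to create and foreground the new process group rather than a full setsid() (which would also detach the controlling terminal), and error handling is minimal:

import os
import signal
import sys

def run_in_foreground(argv):
    # Run argv (e.g. ["less", "somefile"]) as the foreground job on
    # our controlling terminal, then take the terminal back.
    tty = sys.stdin.fileno()
    pid = os.fork()
    if pid == 0:
        os.setpgid(0, 0)        # child: become its own process group
        os.execvp(argv[0], argv)
    try:
        os.setpgid(pid, pid)    # parent does it too, to dodge the race
    except OSError:
        pass                    # child may already have exec'd
    os.tcsetpgrp(tty, pid)      # hand the terminal to the child group
    os.waitpid(pid, 0)
    # Taking the terminal back from a background group raises SIGTTOU,
    # so ignore it around the call.
    old = signal.signal(signal.SIGTTOU, signal.SIG_IGN)
    os.tcsetpgrp(tty, os.getpgrp())
    signal.signal(signal.SIGTTOU, old)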
