Take user input from the background - Linux

What I'm trying to accomplish is to have a process running in the background, started from a Linux terminal, which takes user input and acts on it even when the terminal window is not focused, so that I can work with other GUI applications and then, when I press some pre-defined keys, alter the program's state without my current window losing focus. Just as simple as that (not that simple for me, though).
I'm not asking for a specific kind of implementation; I'm fine with anything that may work: C, C++, Java, a Linux Bash script... The only requirement is that it works under Linux.
Thank you very much

Well, you can have your server read a FIFO or a Unix domain socket (or even a message queue), then write a client that takes command-line input and writes it to the pipe/queue from some other terminal session. With FIFOs you can just echo input from the command line itself to the pipe, but FIFOs come with their own headaches. The "push the button and magic happens" part is a lot trickier, but maybe that was just badly phrased?
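As a rough illustration of the FIFO variant, here is a minimal shell sketch; the pipe path /tmp/mydaemon.fifo and the command names are made up for the example rather than anything prescribed above:

mkfifo /tmp/mydaemon.fifo
# Server loop: block until a line arrives on the pipe, then act on it.
while true; do
  read -r cmd < /tmp/mydaemon.fifo || continue
  case "$cmd" in
    pause)  echo "pausing..." ;;
    resume) echo "resuming..." ;;
    quit)   break ;;
    *)      echo "unknown command: $cmd" ;;
  esac
done
rm /tmp/mydaemon.fifo

The client is then just echo pause > /tmp/mydaemon.fifo from any other terminal; binding that one-liner to a hotkey in your desktop environment gives you the "press a key, something happens" behaviour without the terminal ever taking focus.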

Related

Is there a way to make a bash script process messages that have been sent to it using the write command

Is there a way to make a bash script process messages that have been sent to it using the "write" command? So for example, if a user wants to activate a feature in my script, could I make it so that they can send the script a command using the write command?
One possible method I thought of was to configure logging for a screen session and then have the bash script parse the log from there, but I'm not sure whether there is a simpler or more efficient way to tackle this.
EDIT: I was thinking that, as an alternative solution, I could use a named pipe. I'm worried that it would break, though, if the /tmp partition gets filled up completely (not sure if this would impact write as well?). I'm going to be running this script on a shared box, and every once in a while someone completely fills up the /tmp partition and then just leaves it like that until people start complaining.
Hmm, you are really trying to bend a poor Unix command into doing something it was never specified for. From the man page (emphasis mine):
The write utility allows you to communicate with other users, by copying
lines from your terminal to theirs
That means that write is intended to copy lines directly between terminals. As soon as you say "I will dump the terminal output with screen, and then parse the dump file", you lose the simplicity of write (and you also need disk space, with the problem of removing old lines from a sequential file).
Worse, as your script lives on its own, it could (should?) be a daemon script attached to no terminal at all.
So if I have correctly understood your question, your requirements are:
a script that does some tasks and should be able to respond to asynchronous requests - common approaches are named pipes or network or Unix domain sockets; less common are files in a dedicated folder with an optional signal to trigger immediate processing; appending lines to a sequential file, while possible, is uncommon because of the problem of synchronizing access
a simple and convivial way for users to pass requests. OK, write is nice for that part, but it is much too hard to interface with, IMHO
If you do not want to waste time on that part and would rather use standard tools, I would recommend the mail system: it is trivial to alias a mail address to a program that will be called with the mail message as its input. But I am not sure it is worth it, because the user could just as well call the program directly with the request as input or as a command-line parameter.
So the client part could simply be a program that (a shell sketch of both sides follows the list):
creates a temporary file in a dedicated folder (mkstemp is your friend in C or C++, or mktemp in shell - but beware of race conditions)
writes the request to that file
optionally sends a signal to a PID - provided the script writes its own PID to a dedicated file on startup
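A minimal shell sketch of both sides, where the paths /var/spool/myscript and /var/run/myscript.pid and the handle_request function are placeholders for whatever your script actually uses:

# Client: drop the request into the dedicated folder, then poke the daemon.
req=$(mktemp /var/spool/myscript/req.XXXXXX) || exit 1
printf '%s\n' "$*" > "$req"
kill -USR1 "$(cat /var/run/myscript.pid)"    # optional: ask for immediate processing

# Daemon side: record the PID at startup and process the folder on SIGUSR1.
echo $$ > /var/run/myscript.pid
trap 'for f in /var/spool/myscript/req.*; do
        [ -e "$f" ] || continue
        handle_request "$f"                  # placeholder for the real work
        rm -f "$f"
      done' USR1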

How do I read and write repeatedly from a process in vim?

It was hard to phrase this as a question, but here is what I want to do:
I want vim to execute a process and to write to its stdin and read from its stdout file descriptors repeatedly. In other words, I want a back-and-forth dialogue between vim and another program.
I'll use cat as a simple example. If you run cat with no command-line arguments, then whatever you type on stdin is output to stdout after each newline character.
What I would like is to have a vim window which displays the most recent output of some program and to be able to write to its stdin upon certain events. So, unlike the following:
:read !cat
which waits for you to finish typing and press Ctrl-D to close cat's stdin, I want to display the output immediately after I press enter and to keep the process running so that I can type more.
Ultimately, I don't intend to be typing the input to the process; I want events (e.g. moving the cursor) to trigger vim to write specific commands to this process and display the output.
The reason I want the program to continue running instead of invoking the process once for each event is that the output to the program will be commands that generate state. If the program had to be invoked for each command, it would have to save the state to a file and read it in each time.
An alternative I am considering: writing the program to listen on a port. Then, vim invokes a command that simply opens the socket and passes the vim command to the program and returns the message from the program. This would require me writing two programs, though, which I hope is unnecessary.
What I am trying to do here is write a tool that analyses your code and provides an interactive command-line interface (e.g. commands that do things like "output a list of all the lines which set this variable"). However, rather than running this program in a separate terminal or screen session, I would like vim to be able to integrate the output of this program in a window, if that is possible.
You should check out vimproc. You can use vimproc#popen3 to start the process. vimproc#popen3 returns an object (a dictionary) with a stdin member that has a write method and a stdout member that has a read method.
The problem is how to trigger the reading and writing. Vim is single-threaded, so you'll have to rely on autocmd events. Obviously you'll want to try reading whenever you write something (just in case), but you should also use the CursorHold event.
You can also use Python for the IO. While it seems like you could use Python threading to trigger the reading, I would advise against it: Vim was never built for multithreading, and in my experience, trying to hack multithreading into it with Python threads often causes race conditions and crashes Vim.

Controlling multiple background processes from a shell on an embedded Linux

I am currently working with an embedded system running Linux. I need to run multiple applications at the same time, and I would like to be able to launch them all through one script. A colleague has already implemented this using a wrapper script and return codes.
wrapperScript.sh $command > output_log.txt &
wrapperScript.sh $command2 > output_log2.txt &
But the problem arises when exiting the applications. Normally, all of the applications on the embedded system require the user to press q to exit. The wrapper script, however, rather than doing that when it gets a kill signal or user signal, just kills the process. This is dangerous because it assumes that the application has the proper facilities to deal with the kill signal (which is not always the case, and leads to memory leaks and unwanted socket connections). I have looked into automation programs such as expect, but since I am using an embedded board, I am unable to get expect for it. Is there a way, in the bash shell or in embedded C, to deal with multiple processes and have one single program automatically send the q keystroke to them?
I would also like to be able to keep a log of each application's output.
EDIT:
Solution:
Okay, I found the answer to the problem: Expect is the way to go about it in this situation. There is a serious limitation in that it might be slower, but the trade-off is not bad here. I decided to use the Expect scripting language to implement the solution. There are certain trade-offs.
Pros:
* Precise control over the embedded application
* Can make the process interactive to the user
* Can deal with multiple processes
Cons:
* Performance is slow
Use a pipe
Make the command read input from a named pipe. You'll then be able to send it commands from anywhere.
mkfifo command1.ctrl
{ "$command1" <command1.ctrl >command1.log 2>&1;
rm command1.ctrl; } &
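With that in place, asking the application to quit cleanly from anywhere (another shell, a cron job, your shutdown script) is just a write to the pipe; the q here is whatever keystroke your application expects, and printf q can be used instead of echo if it must not be followed by a newline:

echo q > command1.ctrl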
Use screen
Run your applications inside the Screen program. You can run all your commands in separate windows in a single instance of screen (you'll save a little memory that way). You can specify the commands to run from a Screen configuration file:
sessionname mycommands
screen -t command1 command1
screen -t command2 command2
To terminate a program, use
screen -S mycommands -p 1 -X stuff 'q
'
where 1 is the number of the window to send the input to (each screen clause in the configuration file starts a window). The text after stuff is the input to send to the program; note the presence of a newline after the q (some applications may require a carriage return instead; you can get one with stuff "q$(printf \\015)" if your shell isn't too feature-starved). If your command expects a q with no newline at all, just stuff q.
For logging, you can use Screen's logging feature, or redirect the output to a file as before.
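As a sketch of the logging side, here is the same configuration file with logging turned on; deflog and logfile are standard Screen configuration directives, but check your version's manual for the exact filename escapes:

sessionname mycommands
deflog on
logfile command%n.log
screen -t command1 command1
screen -t command2 command2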

Invoking the less application from GNU readline

This is a bit of a support question; apologies for that.
I have an application linked with GNU readline. The application can invoke shell commands (similar to invoking tclsh using readline wrapper). When I try to invoke the Linux less command, I get the following error:
Suspend (tty output)
I'm not an expert on terminal issues. I've tried to google it but found no answer. Does anyone know how to solve this issue?
Thanks.
You probably need to investigate the functions rl_prep_terminal() and rl_deprep_terminal() documented in the readline manual:
Function: void rl_prep_terminal(int meta_flag)
Modify the terminal settings for Readline's use, so readline() can read a single character at a time from the keyboard. The meta_flag argument should be non-zero if Readline should read eight-bit input.
Function: void rl_deprep_terminal(void)
Undo the effects of rl_prep_terminal(), leaving the terminal in the state in which it was before the most recent call to rl_prep_terminal().
The less program is likely to get confused if the terminal is already in the special mode used by the Readline library and it tries to tweak the terminal into an equivalent mode. This is a common problem for programs that work with the curses library, or other similar libraries that adjust the terminal status and run other programs that also do that.
Whilst counterintuitive, it may be stopped waiting for input (some OSes and shells report Stopped/Suspended (tty output) when you might expect it to refer to (tty input)). This would fit the usual behaviour of less when it stops at the end of (what it thinks is) the screen length.
Can you use cat or head instead? Or feed less some input? Or look at the less man/info pages to see which options to less might suit your requirement (e.g. w, z, F)?
Your readline application is making itself the controlling application for your tty.
When you invoke less from inside the application, it wants to be in control of the tty as well.
If you are trying to invoke less in your application to display a file for the user, you want to put the newly fork'd process into its own process group before calling exec. You can do this with setsid(). Then, when less calls tcsetpgrp(), it will not get thrown into the background with SIGTTOU.
When less finishes, you'll want to restore the foreground process group with tcsetpgrp() as well.

Linux - communicating with a process? rejoin process in action?

I feel somewhat dumb asking this, but I'm relatively new to Linux (more in terms of experience than time), and one thing I've always wondered is whether I can 'rejoin' (my own term) a process while it's running.
For example, if I set a game server or eggdrop IRC bot to run in the background, is there a command I can use to view that process in action and view all the output it delivers to the console?
I'm not talking about just viewing the process using the 'top' command, but actually linking to it as if I just ran it from the command line.
Thanks.
Debuggers can "attach" to running processes, but you might be better off running your program in screen (which lets you detach from and reattach to the terminal in a fairly natural way).
There might be some good stuff in:
Redirect STDERR / STDOUT of a process AFTER it’s been started, using command line?
Can you be more specific? Are you just talking about backgrounding a process in the current session, then putting it back in the foreground?
E.g.:
doLongTask &
# Later
fg %3
3 in this example is the job number of this instance of doLongTask. You can see all running jobs with:
jobs
But note this will still only let you see what's being output to the console, i.e. stdout and stderr, minus any redirections.
The simple answer is:
>> ./runmyserver
<press ctrl-z>
>> bg
>> ...do something else ...
>> fg
You can also start in the background with:
>> ./runmyserver &
For more complicated stuff, like disconnecting the server from your console session (so it's still running when you log out), you really want screen. Maybe beg them for it; it isn't really a security risk, and it's a useful program to have around.
Also note that Ctrl-Z will actually pause your server until you run bg, so if people are playing on it, it might skip a beat; best to do it quickly.
Finally, many game servers have a remote login for this kind of thing, which would solve many of these issues. Make sure your game and host don't already support this before looking for alternatives.
EDIT: Re-read your question. It sounds like you could at least get the output using redirect to a file. This won't let you add more input though:
./runmyserver > log.txt
If you know ahead of time that you want to do this, use screen(1) and run your server in the foreground in a screen session. You will be able to detach from your screen session and have the process keep running. You can then later re-attach your screen session and view any output it has made since, up to the size of the scrollback buffer.
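As a concrete sketch of that workflow (the session name gameserver and ./runmyserver are placeholders):

screen -S gameserver ./runmyserver    # start the server inside a named screen session
# press Ctrl-a then d to detach; the server keeps running
screen -ls                            # list sessions if you forget the name
screen -r gameserver                  # re-attach later and see the recent output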
