observing stdout of another process - linux

Here's the hypothetical scenario:
I'm running a test script on some hardware attached to box A, which I have root access to. This test script requires minimal user input (flip a switch every half hour or so). About an hour and a half into the test, I realize that this script takes a long, long time to finish, to the tune of eight hours. Box A is located in a very cold, loud server room that is generally not that fun to physically occupy. Box B is located in my office, where I have a nice comfy chair and an endless supply of hot pockets. I want some way to monitor the output of the process running on box A from an ssh session on box B so I know when to go flip the switch, but I don't want to restart the testing process. Had I known from the start that the test would take so long to finish, I would have just piped its output to a log file and tail'd that file from my box B ssh session. If I know the PID of the process running on box A, is it possible to observe the stdout of that process from another session?
Of course, I could just run vnc on box A and log in from box B to take a look at the output, but that defeats the purpose of this hypothetical, which is to learn more about how process pipes, stdout, and output in general work in a Linux environment.
Thoughts?

http://ingvar.blog.linpro.no/2010/07/10/changing-a-process-file-descriptor-on-the-fly/
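The trick in that post boils down to attaching a debugger and re-pointing the process's stdout at a file. A rough sketch of the idea (the PID and path are placeholders, and you need permission to ptrace the process):
gdb -p 12345 -batch \
    -ex 'call (int)creat("/tmp/test.out", 0644)' \
    -ex 'call (int)dup2($1, 1)' \
    -ex 'detach'
tail -f /tmp/test.out    # e.g. from the box B ssh session
creat returns the new descriptor (saved by gdb as $1), and dup2 copies it over descriptor 1, the process's stdout.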

You might want to check out expect
It is useful for automating these kinds of interactions.
You could also redirect the output of the script to a file and monitor said file from another ssh session. I bet the brainy guys on stackoverflow can name about 6 other ways to do it too. :)
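For what it's worth, the logging can be set up without giving up the console, e.g. with tee (script name hypothetical):
$ ./test_script.sh 2>&1 | tee /tmp/test.log
$ tail -f /tmp/test.log    # from the box B ssh session
One caveat: some programs switch to block-buffered output when stdout is a pipe, so the log may lag behind.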

Related

Respond Y dynamically to a shell program

We have a startup script for an application (owned and developed by a different team, but deployments are managed by us), which prompts Y/N to confirm starting after deployment. The number of times it prompts varies depending on the changes in the release.
So the number of prompts can be anywhere from 1 to N (possibly 100 or more).
We have automated the deployment and startup using Jenkins shell script jobs, but the number of prompts is hardcoded to 20, which is sometimes too few.
Could anyone please advise how the number of prompts can be handled dynamically? We need to pass Y whenever the pattern "Do you really want to start" appears in the output.
I checked a few options like expect and read, but was not able to come up with a solution.
Thanks in advance!
In general, the best way to handle this is (a) to use a standard process management system, such as your distro's preferred init system; or, if that's not possible, (b) to adjust the script to run noninteractively (e.g., with a --yes or --noninteractive option).
Barring that, assuming your script reads from standard input and not the TTY, you can use the standard program yes and pipe it into the command you're running, like so:
$ yes | ./deploy
yes prints y (or its argument, so yes Y prints the capital Y this script expects) over and over until it's killed, usually by SIGPIPE.
If your process is reading from /dev/tty instead of standard input, and you really can't convince the other team to come to their senses and add an appropriate option, you'll need to use expect for this.
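A minimal expect sketch, assuming the startup script is ./startup.sh and prompts with the pattern from the question:
expect <<'EOF'
spawn ./startup.sh
expect {
    "Do you really want to start" { send "Y\r"; exp_continue }
    eof
}
EOF
The exp_continue keeps the loop going, so it answers Y no matter how many times the prompt appears.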

Is there a way to make a bash script process messages that have been sent to it using the write command

Is there a way to make a bash script process messages that have been sent to it using the "write" command? So for example, if a user wants to activate a feature in my script, could I make it so that they can send the script a command using the write command?
One possible method I thought of was to configure logging for a screen session and then have the bash script parse the log, but I'm not sure if there would be a simpler or more efficient way to tackle this.
EDIT: I was thinking that as an alternative solution I could use a named pipe. I'm worried that it would break, though, if the /tmp partition gets filled up completely (not sure if this would impact write as well?). I'm going to be running this script on a shared box, and every once in a while someone will completely fill up the /tmp partition and then just leave it like that until people start complaining.
Hmm, you are really trying to circumvent a poor unix command, asking of it something it was not designed for. From the man page (emphasis mine):
The write utility allows you to communicate with other users, by copying
lines from your terminal to theirs
That means that write is intended to copy lines directly between terminals. As soon as you say you will dump terminal output with screen and then parse the dump file, you lose the simplicity of write (and also need disk space, with the problem of removing old lines from a sequential file).
Worse, as your script lives on its own, it could (should?) be a daemon attached to no terminal.
So if I have correctly understood your question, your requirements are:
a script that does some tasks and should be able to respond to asynchronous requests. Common mechanisms are named pipes or network or unix domain sockets; less common are files in a dedicated folder with an optional signal to trigger immediate processing. Appending lines to a sequential file, while possible, is uncommon because of access synchronization problems.
a simple and friendly way for users to pass requests. OK, write is nice for that part, but much too hard to interface with, IMHO.
If you do not want to waste time on that part, standard tools can help: I would recommend the mail system. It is trivial to alias a mail address to a program that will be called with the mail message as input. But I am not sure it is worth it, because the user could directly call the program with the request as input or as a command line parameter.
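For reference, the alias is a one-liner in sendmail-style setups; add something like this to /etc/aliases (handler path hypothetical) and run newaliases:
myscript: "|/usr/local/bin/myscript-handler"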
So the client part could be simply a program that (sketched below):
creates a temporary file in a dedicated folder (mkstemp is your friend in C or C++, or mktemp in shell, but beware of race conditions)
writes the request to that file
optionally sends a signal to a PID, provided the script writes its own PID to a dedicated file on startup
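A shell sketch of that client, with hypothetical spool and PID file paths:
req=$(mktemp /var/spool/myscript/req.XXXXXX) || exit 1
printf '%s\n' "activate feature-x" > "$req"    # the request itself
kill -USR1 "$(cat /var/run/myscript.pid)"      # optional: wake the daemon now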

When running a shell script with loops, operation just stops

NOTICE: Feedback on how this question can be improved would be great, as I am still learning. There is no code here because I am confident the script itself does not need fixing: it works as it should when I change the parameters to produce fewer outputs, and debugging produced no errors. When the parameters are changed to produce more outputs, the script runs for hours and then stops. My goal with the question below is to determine whether Linux will time out a long-running process (or something related) and, if so, how that can be resolved.
I am running a shell script that has several for loops which does the following:
- Goes through existing files and copies data into a newly saved/named file
- Makes changes to the data in each file
- Submits these files (which number in the thousands) to another system
The script is very basic (beginner here), but so long as I don't give it too much to generate, it works as it should. However, if I want it to loop through all possible cases, which means it will generate tens of thousands of files, then after a certain amount of time the shell script just stops running.
I have more than enough hard drive storage to support all the files being created. One thing to note, however, is that during the part where files are being submitted, if the machine they are submitted to is full at that moment, my shell script has to pause where it is and wait for the other machine to clear. This works for a while, but eventually the shell script stops running and won't continue.
Is there a way to make it continue or prevent it from stopping? I typed Ctrl+Z to suspend the script and then fg to resume it, but it still does nothing. I check the status by running ls -la to see if the file sizes are increasing, and they are not, although top/ps says the script is still running.
Assuming that you are using Bash for your script: most likely you are running out of system resources for your shell session, and most likely the manner in which your script works is causing the issue. Without seeing your script it is difficult to provide specific guidance; however, there are several things you can check at the system level that may assist you, i.e.
review system logs for errors about your process or about 'system resources'
check your docs: man ulimit (or 'man bash' and search for 'ulimit'); a quick check is sketched after the loop example below
consider removing deep nesting (if present); instead, create work sets where one step builds the data needed for the next, i.e. if possible, instead of:
step 1 (all files) ## guessing this is what you are doing
step 2 (all files)
step 3 (all files)
Try each step for each file instead. Something like:
for MY_FILE in ${FILE_LIST}
do
    step_1 "${MY_FILE}"
    step_2 "${MY_FILE}"
    step_3 "${MY_FILE}"
done
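As a quick check of the limits mentioned above (an aside, not a fix by itself):
ulimit -a    # show all per-session limits
ulimit -n    # just the open-file limit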
:)
Dale

Take user input from the background

What I'm trying to accomplish is to have a process running in the background from a Linux terminal which takes user input and acts on that input even when the terminal window is not focused, so that I can work with other GUI applications and then, when I press some pre-defined keys, alter the program's state without losing the focus of my current window. Just as simple as that (not that simple for me, though).
I'm not asking for a specific kind of implementation. I'm fine with anything that may work: C, C++, Java, a Linux Bash script... The only requisite is that it works under Linux.
Thank you very much
Well, you can have your server read a FIFO or a unix domain socket (or even a message queue), then write a client that takes command line input and writes it to the pipe/queue from some other terminal session. With FIFOs you can just echo input from the command line itself to the pipe, but FIFOs come with their own headaches. The "push the button and magic happens" part is a lot trickier, but maybe that was badly phrased?
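A minimal FIFO sketch of that server/client split, with a hypothetical pipe path:
# server side: block on the pipe and handle each line
mkfifo /tmp/mydaemon.fifo
while read -r cmd < /tmp/mydaemon.fifo
do
    echo "got: $cmd"    # dispatch on $cmd here
done
# client side, from any other terminal:
echo "do-something" > /tmp/mydaemon.fifo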

Linux - communicating with a process? rejoin process in action?

I feel somewhat dumb asking this, but I'm relatively new to Linux (more in terms of experience than time), and one thing I've always wondered is whether I can 'rejoin' (my own term) a process while it's running.
For example, if I set a game server or eggdrop IRC bot to run in the background, is there a command I can use to view that process in action and view all the output it delivers to the console?
I'm not talking about just viewing the process using the 'top' command, but actually linking to it as if I had just run it from the command line.
Thanks.
Debuggers can "attach" to running processes, but you might be better off running your program in screen (which lets you detach from and reattach to the terminal in a fairly natural way).
There might be some good stuff in:
Redirect STDERR / STDOUT of a process AFTER it’s been started, using command line?
Can you be more specific? Are you just talking about backgrounding a process in the current session, then putting it back in the foreground?
E.g.:
doLongTask &
# Later
fg %3
3 in this example is the job number of this instance of doLongTask. You can see all running jobs with:
jobs
But note this will still only let you see what's being output to the console, i.e. stdout and stderr, minus any redirections.
The simple answer is:
>> ./runmyserver
<press ctrl-z>
>> bg
>> ...do something else ...
>> fg
You can also start in the background with:
>> ./runmyserver &
For more complicated stuff, like disconnecting the server from your console session (so it's still running when you log out), you really want screen. Maybe beg your admins for it; it isn't really a security risk, and it's a useful program to have around.
Also note that Ctrl-Z will actually pause your server until you run bg, so if people are playing on it, it might skip a beat; best to do it quickly.
Finally, many game servers have a remote console login for this kind of thing, which would solve many of these issues. Check whether your game and host support it before looking for alternatives.
EDIT: Re-read your question. It sounds like you could at least get the output using a redirect to a file. This won't let you add more input, though:
./runmyserver > log.txt
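From another session you can then follow the file, as the question above suggests:
tail -f log.txt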
If you know ahead of time that you want to do this, use screen(1) and run your server in the foreground in a screen session. You will be able to detach from the screen session and have the process keep running, then re-attach it later and view any output made since, up to the size of the scrollback buffer.
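The basic flow, with a session name of your choosing:
$ screen -S gameserver    # start a named session
$ ./runmyserver           # run the server in the foreground
(press Ctrl-a then d to detach; the server keeps running)
$ screen -r gameserver    # later, reattach and see the output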