Restricting pipes and redirects (Python 3)

I have a program that takes standard input from the user and is run from the command line. Is there some way to make a program ignore pipes and redirects?
For example: python program.py < input.txt > output.txt would behave exactly as if you had run python program.py on its own.

In the general case, there is no simple way to find the terminal the user launched you from. There are some techniques you can use, but they will not always work.
You can use os.isatty() (or the isatty() method on file objects such as sys.stdin and sys.stdout) to detect whether a file descriptor appears to be an interactive terminal session. It is possible you are hooked up to a terminal session other than the one the user used to launch your program, so this is not foolproof. Such a terminal session might even be under the control of a program rather than a human.
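A minimal sketch of that check (it tells you only about this process's own streams, nothing more):

import sys

# Heuristic only: reports whether *this process's* stdin/stdout are
# attached to a terminal, not which terminal the user launched you from.
if sys.stdin.isatty() and sys.stdout.isatty():
    print("stdin and stdout look like an interactive terminal")
else:
    print("stdin or stdout is piped or redirected")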
Under Unix, processes have a notion of a "controlling terminal." You may be able to talk to that via os.ctermid(). But the user can manipulate this value before launching your process. You also may not have a controlling terminal at all, e.g. if running as a daemon.
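A small Unix-only sketch of that approach; os.ctermid() usually just returns "/dev/tty", and the interesting part is whether you can actually open it:

import os

# Opening the controlling terminal fails with OSError when there is
# none, e.g. when the process is running as a daemon.
try:
    with open(os.ctermid(), "r+") as tty:
        tty.write("hello, controlling terminal\n")
except OSError:
    print("no controlling terminal available")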
You can inspect the parent process and see if any of its file descriptors are hooked up to terminal sessions. Unfortunately, I'm not aware of any cross-platform way to do that. On Linux, I'd start with os.getppid() and the /proc filesystem (see proc(5)). If the parent process has exited (e.g. the user ran your_program.py & disown; exit under bash), this will not work. But in that case, there isn't much you can do anyway.
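A Linux-only sketch of that idea, using the /proc layout described in proc(5) (as noted above, this breaks if the parent has already exited):

import os

ppid = os.getppid()
fd_dir = f"/proc/{ppid}/fd"
try:
    for fd in os.listdir(fd_dir):
        # Each entry is a symlink to whatever the parent has open.
        target = os.readlink(os.path.join(fd_dir, fd))
        if target.startswith(("/dev/pts/", "/dev/tty")):
            print(f"parent fd {fd} -> {target}")
except OSError:
    print("parent is gone or /proc is not accessible")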

Related

Shell script: open multiple terminals and run commands?

I'm trying to create a very simple shell script that opens and runs several instances of my program. With batch, I would do something like this:
@echo off
start python webstore.py 55530
start python webstore.py 55531
start python webstore.py 55532
exit
That would open three terminals and run one command in each of them with a different command-line parameter. How would I do the same with a shell script that runs on every single Unix-based platform? I've seen some commands for opening terminals, but they are platform specific (gnome-terminal, xterm, and so on).
How would I do the same with a shell script that runs on every single Unix-based platform?
What you're asking for is a bit unreasonable. Think about it this way: on Windows, you're always inside its Desktop Window Manager, period, and the only shell choice you have is between PowerShell and cmd.exe. But on Linux, it's a little more complicated:
Like you said, you can have rxvt, xterm, or any of a dozen other terminal emulators installed.
That's not the only issue: you can also be running any window manager (though that matters less here).
You can be using either Xorg or Wayland.
You might not be using a graphical environment at all, e.g. running everything in the Linux console, which, unless you use fancy programs such as fbterm or tmux, is pretty much incapable of multitasking, let alone spawning new windows.
You might not even be using the computer physically at all, because you're connected to it over SSH. No spawning windows there either (unless you use something like X11 forwarding).
Finally, the user's shell can be zsh, bash, sh, fish, etc., each of which comes with its own idiosyncrasies.
IMO your best bet is to test in your script which programs are installed, or to script around a terminal multiplexer such as tmux and require it to be installed on the target machine. This will work over SSH, in the Linux console, or in any other scenario above.
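As a rough illustration, here is how scripting around tmux might look from Python; the session name webstore and the ports are taken from the question, and tmux itself is assumed to be installed:

import subprocess

ports = [55530, 55531, 55532]

# Start a detached session whose first window runs the first instance...
subprocess.run(
    ["tmux", "new-session", "-d", "-s", "webstore",
     f"python webstore.py {ports[0]}"],
    check=True,
)
# ...then add one window per remaining instance.
for port in ports[1:]:
    subprocess.run(
        ["tmux", "new-window", "-t", "webstore",
         f"python webstore.py {port}"],
        check=True,
    )
print("attach with: tmux attach -t webstore")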
If, however, you do not care about the output of the commands you're about to run, you can just detach them from your current terminal session like this:
command1 &>/dev/null &
command2 &>/dev/null &
command3 &>/dev/null &
Be mindful that this:
Will run the commands in parallel.
Won't show you any output. If you remove &>/dev/null, the output from the commands will be interleaved, which is probably not what you want.
Closing the terminal usually kills its child processes, which, in this case, will kill the command instances working in the background.
Issues mentioned above can be worked around, but I believe it is a little out of scope for this answer.
Personally, I'd go either for tmux or for the detaching solution, depending on whether I need to see the console output.
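For completeness, the detaching approach can also be sketched in Python with just the standard library (the sleep commands are stand-ins for command1/command2/command3 above):

import subprocess

commands = [["sleep", "1000"], ["sleep", "1000"], ["sleep", "1000"]]

for cmd in commands:
    # Discard all I/O and start each child in its own session, so it
    # keeps running (no SIGHUP) even after this terminal is closed.
    subprocess.Popen(
        cmd,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,
    )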

How does Linux Expect script work?

I once tried to input a password by I/O redirection, like echo <password> | ssh <user>@<host>, but of course it didn't work. I then learned that ssh actually reads the password directly from /dev/tty instead of stdin, so I/O redirection doesn't work for it.
As far as I know, an Expect script is the standard way to do this kind of job. I'm curious how Expect works. I guess it runs the target program in a child process and changes the child's /dev/tty to refer to somewhere else, but I don't know how.
It uses something called a pseudo-TTY, which looks to the called program like a real TTY but allows for programmatic control. See e.g. Don Libes' Exploring Expect, p. 498f.
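To see the same idea from Python, here is a hedged sketch using the third-party pexpect library (a Python reimplementation of Expect); the host, user, and password are made up for the example:

import pexpect  # third-party: pip install pexpect

# pexpect runs the child behind a pseudo-TTY, so ssh believes it is
# talking to a real terminal and reads the password from it.
child = pexpect.spawn("ssh user@example.com uptime")
index = child.expect(["password:", pexpect.EOF, pexpect.TIMEOUT])
if index == 0:
    child.sendline("secret")  # illustration only
    child.expect(pexpect.EOF)
print(child.before.decode())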

Take user input from the background

What I'm trying to accomplish is to have a process running in the background from a Linux terminal which takes user input and acts on that input even when the terminal window is not focused, so I can work with other GUI applications and still press some predefined keys to alter the program's state without losing the focus of my current window. Just as simple as that (not that simple for me, though).
I'm not asking for a specific kind of implementation. I'm fine with anything that may work: C, C++, Java, a Linux Bash script... The only requirement is that it works under Linux.
Thank you very much
Well, you can have your server read a FIFO or a Unix domain socket (or even a message queue), then write a client that takes command-line input and writes it to the pipe/queue from some other terminal session. With FIFOs you can even echo input to the pipe straight from the command line, but FIFOs come with their own headaches. The "push the button and magic happens" part is a lot trickier, but maybe that was badly phrased?
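A minimal sketch of the FIFO variant in Python (the pipe path /tmp/mydaemon.ctl and the quit command are arbitrary choices for this example):

import os

FIFO = "/tmp/mydaemon.ctl"

if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

# Opening a FIFO for reading blocks until a writer connects; when the
# writer closes its end we get EOF, so reopen and keep listening.
while True:
    with open(FIFO) as pipe:
        for line in pipe:
            command = line.strip()
            if command == "quit":
                raise SystemExit
            print("received command:", command)

Run it in the background, then feed it input from any other shell with, e.g., echo quit > /tmp/mydaemon.ctl.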

Linux process in background - "Stopped" in jobs?

I'm currently running a process with the & sign.
$ example &
However (please note I'm a newbie to Linux), I realised that almost immediately after issuing such a command I get a note that my process received a stop signal. If I do
$ jobs
I'll get the list with my example process marked with a little note, "Stopped". Is it really stopped, not working at all in the background? How exactly does this work? I'm getting mixed information from the Internet.
In Linux and other Unix systems, a job that is running in the background but still has its stdin associated with its controlling terminal (i.e. the window it was run in) will be sent a SIGTTIN signal when it tries to read from that terminal. By default this causes the program to be completely stopped, pending the user bringing it to the foreground (fg %job or similar) so that input can actually be given to the program. To avoid the program being paused in this way, you can either:
Make sure the program's stdin channel is no longer associated with the terminal, by redirecting it either from a file with appropriate contents for the program to read, or from /dev/null if it really doesn't need input - e.g. myprogram < /dev/null &.
Exit the terminal after starting the program, which will cause the association with the program's stdin to go away. But this causes a SIGHUP to be delivered to the program (meaning the input/output channel experienced a "hangup"), which normally terminates the program; that can be avoided by using nohup - e.g. nohup myprogram &.
If you are at all interested in capturing the output of the program, this is probably the best option, as it prevents both of the above signals (as well as a couple of others) and saves the output for you to look at, so you can determine whether there are any issues with the program's execution:
nohup myprogram < /dev/null > ${HOME}/myprogram.log 2>&1 &
Yes, it really is stopped and no longer working in the background. To bring it back to life, type fg job_number.
From what I can gather, background jobs are blocked from reading the user's terminal. When one tries to do so, it will be suspended until the user brings it to the foreground and provides some input. "Reading from the user's terminal" can mean either directly trying to read from the terminal or changing terminal settings.
Normally that is what you want, but sometimes programs read from the terminal and/or change terminal settings not because they need user input to continue but because they want to check if the user is trying to provide input.
http://curiousthing.org/sigttin-sigttou-deep-dive-linux has the gory technical details.
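A tiny throwaway script makes the behaviour easy to reproduce (the filename is arbitrary):

# demo.py - run it from an interactive shell as: python3 demo.py &
# The moment input() tries to read from the terminal, the kernel sends
# the job SIGTTIN and "jobs" reports it as Stopped.
text = input("type something: ")
print("got:", text)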
Just enter fg, which will bring the job back to the foreground and resolve the error when you then try to exit.

Controlling multiple background processes from a shell on an embedded Linux

I am currently working with an embedded system that runs Linux. I need to run multiple applications at the same time, and I would like to be able to run them through one script. A colleague has already implemented this using a wrapper script and return codes.
wrapperScript.sh $command > output_log.txt &
wrapperScript.sh $command2 > output_log2.txt &
But a problem arises when exiting the applications. Normally, all the applications on the embedded system require the user to press q to exit. But when the wrapper script gets a kill signal or a user signal, rather than doing that, it just kills the process. This is dangerous because the wrapper script assumes the application has the proper facilities to deal with the kill signal (that is not always the case, and it leads to memory leaks and unwanted socket connections). I have looked into automation programs such as Expect, but since I am using an embedded board, I am unable to get Expect for it. Is there a way, in the bash shell or in embedded C, to deal with multiple processes and have one single program automatically send the q keystroke to them?
I would also like to be able to keep logs of the applications' output.
EDIT:
Solution:
Okay, I found the answer to the problem: Expect is the way to go in this kind of situation. There is a serious limitation in that it can be slower, but the trade-off is not bad here. I decided to use the Expect scripting language to implement the solution. There are certain trade-offs.
Pros:
* Precise control over the embedded application
* Can make the process interactive for the user
* Can deal with multiple processes
Cons:
* Performance is slow
Use a pipe
Make the command read input from a named pipe. You'll then be able to send it commands from anywhere.
mkfifo command1.ctrl
{ "$command1" <command1.ctrl >command1.log 2>&1;
rm command1.ctrl; } &
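You can then drive the program from anywhere by writing to the pipe, e.g. echo q > command1.ctrl to send it the q keystroke (assuming, as in the question, that q is what makes it quit cleanly).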
Use screen
Run your applications inside the Screen program. You can run all your commands in separate windows in a single instance of Screen (you'll save a little memory that way). You can specify the commands to run from a Screen configuration file:
sessionname mycommands
screen -t command1 command1
screen -t command2 command2
To terminate a program, use
screen -S mycommands -p 1 -X stuff 'q
'
where 1 is the number of the window to send the input to (each screen clause in the configuration file starts a window, numbered from 0). The text after stuff is the input to send to the program; note the presence of a newline after the q (some applications may require a carriage return instead; you can get one with stuff "q$(printf \\015)" if your shell isn't too feature-starved). If your command expects a q with no newline at all, just use stuff q.
For logging, you can use Screen's logging feature, or redirect the output to a file as before.
