Launching a Python script in a separate console window from a Python script - python-3.x

I have a python command-line tool that allows the user to select a variety of options, each a module.
One of the options is a standalone python script that doesn't share any I/O or state with the main program, but it runs continuously and would be blocking. I'd really like to launch it in a separate console window, where the user will be prompted for input and it will run until they manually exit.
I've tried several subprocess options thus far, but the farthest I've gotten is launching a new window that just...hangs.
Of course, I'd like to be as OS-agnostic as possible. I'm guessing the type of terminal emulator matters here, though, among other things. Should I be looking at the multiprocessing module?
I welcome any advice that would help me get on the right track or point out any obvious (or perhaps not-so-obvious) flaws in my perspective. I'd like to adhere to the best-practice for this situation but am just not experienced enough. Thanks.
Edit: I got this to work by calling the actual submodule:
os.system("gnome-terminal -e 'bash -c \"python3 -m name.of.module; exec bash\"'")
This works splendidly, but I get all this ugly output from Gnome inside of the main program that launched the second process:
# Option “-e” is deprecated and might be removed in a later version of gnome-terminal.
# Use “-- ” to terminate the options and put the command line to execute after it.
# _g_io_module_get_default: Found default implementation local (GLocalVfs) for ‘gio-vfs’
# posix_spawn avoided (fd close requested)
# _g_io_module_get_default: Found default implementation dconf (DConfSettingsBackend) for ‘gsettings-backend’
# watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)
# unwatch_fast: "/org/gnome/terminal/legacy/" (active: 0, establishing: 1)
# watch_established: "/org/gnome/terminal/legacy/" (establishing: 0)
Using -- in lieu of -e causes a child process error. I've also tested other subprocess calls with the -- option and I still get some ugly output from Gnome. I can pipe stderr to /dev/null, but that doesn't feel very clean.
Is this generally a sensible solution, or is this bad design (on my part, that is)?

Thus far, I've gotten this to work on both Linux and Mac. It's ugly, though; I'd welcome a better answer, but haven't found one.
import os
import sys

operating_sys = sys.platform

def open_new_window(module_name):
    if operating_sys in ("linux", "linux2"):
        # gnome-terminal's chatter goes to stderr, so discard it there
        os.system(f"gnome-terminal -e 'bash -c \"python3 -m app.{module_name}; exec bash\"' 2>/dev/null")
    elif operating_sys == "darwin":
        os.system(f"""osascript -e 'tell app "Terminal"
            do script "cd {root_path}; python3 -m app.{module_name}"
        end tell' """)
    else:
        print("[-] Your operating system does not support this utility.\n")
It works. Redirecting stderr to /dev/null on the Linux branch gets rid of the messy Gnome output. This method allows me to dynamically call modules that need to run in their own console window, detached from the main program. For scripts that don't need to share state, it works just fine and manages to be somewhat cross-platform.
Still, I feel like there must be a more Pythonic way to do this.
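One somewhat more Pythonic sketch of the same idea: build the terminal command as an argv list and hand it to subprocess, discarding gnome-terminal's stderr without a shell redirect. The app.<module> package path and the gnome-terminal/osascript choices are carried over from the snippet above; note that the asker reports `--` failing with os.system, but passing an argv list sidesteps the quoting that usually causes that, so this is an untested assumption, not a drop-in replacement.

```python
import subprocess
import sys

def terminal_command(module_name, platform=None):
    """Build the argv that opens app.<module_name> in a new terminal window.

    Returns None on unsupported platforms. The "app" package name is the
    one assumed in the snippet above.
    """
    platform = platform or sys.platform
    inner = f"python3 -m app.{module_name}"
    if platform.startswith("linux"):
        # `--` terminates gnome-terminal's options, as its warning suggests
        return ["gnome-terminal", "--", "bash", "-c", f"{inner}; exec bash"]
    if platform == "darwin":
        return ["osascript", "-e", f'tell app "Terminal" to do script "{inner}"']
    return None

def open_new_window(module_name):
    cmd = terminal_command(module_name)
    if cmd is None:
        print("[-] Your operating system does not support this utility.\n")
        return
    # DEVNULL hides gnome-terminal's deprecation chatter
    subprocess.Popen(cmd, stderr=subprocess.DEVNULL)
```

Separating command construction from launching also makes the platform logic unit-testable without actually opening windows.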
Note: I've noticed on many related posts that the phrase "in a new window" causes some confusion. "In a new window" here means the app literally opens a new terminal window (which terminal emulator, unfortunately, has to be hardcoded) and starts the module as a standalone process (the main app does not keep track of it). For others using this solution, keep this in mind - this is certainly NOT a good way to do it if you need to manage I/O from the main instance.

Related

How to execute a shell program taking inputs with python?

First of all, I'm using Ubuntu 20.04 and Python 3.8.
I would like to run a program that takes command line inputs. I managed to start the program from python with the os.system() command, but after starting the program it is impossible to send the inputs. The program in question is a product interface application that uses the CubeSat Space Protocol (CSP) as a language. However, the inputs used are encoded in a .c file with their corresponding .h header.
In the shell, it looks like this: (screenshot of starting the program)
In python, it looks like this:
import os
os.chdir('/home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1')
os.system('./waf')
os.system('./build/csp-client -k/dev/ttyUSB1')
os.system('cmp ident') #cmp ident is typically the kind of command that does not work on python
The output is the same as in the shell but without the "cmp ident" output; that is to say, it's impossible for me to use the csp-client.
As you can probably see, I'm a real beginner trying to be as clear and precise as possible. I can of course try to give more information if needed. Thanks for your help !
It sounds like the pexpect module might be what you're looking for rather than os.system: it's designed for controlling other applications and interacting with them as a human would. The documentation for it is available here. But what you want will probably look something like this:
import pexpect
p = pexpect.spawnu("/home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1/build/csp-client -k/dev/ttyUSB1")
p.expect("csp-client")
p.sendline("cmp ident")
print(p.read())
p.close()
I'll try and give you some hints to get you started - though bear in mind I do not know any of your tools, i.e. waf or csp-client, but hopefully that will not matter.
I'll number my points so you can refer to the steps easily.
Point 1
If waf is a build system, I wouldn't keep running that every time you want to run your csp-client. Just use waf to rebuild when you have changed your code - that should save time.
Point 2
When you change directory to /home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1 and then run ./build/csp-client you are effectively running:
/home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1/build/csp-client -k/dev/ttyUSB1
But that is rather annoying, so I would make a symbolic link to it from /usr/local/bin so that you can run it with just:
csp-client -k/dev/ttyUSB1
So, I would make that symlink with:
ln -s /home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1/build/csp-client /usr/local/bin/csp-client
You MAY need to put sudo at the start of that command. Once you have that, you should be able to just run:
csp-client -k/dev/ttyUSB1
Point 3
Your Python code doesn't work because every os.system() starts a completely new shell, unrelated to the previous line or shell. And the shell that it starts then exits before your next os.system() command.
As a result, the cmp ident command never goes to the csp-client. You really need to send the cmp ident command on the stdin or "standard input" of csp-client. You can do that in Python, it is described here, but it's not all that easy for a beginner.
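For reference, a minimal sketch of that stdin approach using subprocess (the csp-client path and the cmp ident command are from the question; that the client accepts commands from a pipe rather than requiring a real terminal is an assumption - if it does need a terminal, pexpect from the answer above is the right tool):

```python
import subprocess

def send_commands(argv, commands, timeout=30):
    """Start a program, write each command to its stdin, and return its output."""
    proc = subprocess.Popen(argv, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    out, _ = proc.communicate("\n".join(commands) + "\n", timeout=timeout)
    return out

# e.g.:
# send_commands(["./build/csp-client", "-k/dev/ttyUSB1"], ["cmp ident", "exit"])
```

Unlike chained os.system() calls, everything here happens inside one process, so the commands actually reach the client.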
Instead of that, if you just have a few limited commands you need to send, such as "take a picture", I would make and test complete bash scripts in the terminal until I got them right, and then just call those from Python. So I would make a bash script in your HOME directory called, say, csp-snap and put something like this in it:
#!/bin/bash
# Extend PATH so we can find "/usr/local/bin/csp-client"
PATH=$PATH:/usr/local/bin
{
# Tell client to take picture
echo "nanoncam snap"
# Exit csp-client
echo exit
} | csp-client -k/dev/ttyUSB1
Now make that executable (only necessary once) with:
chmod +x $HOME/csp-snap
And then you can test it with:
$HOME/csp-snap
If that works, you can copy the script to /usr/local/bin with:
cp $HOME/csp-snap /usr/local/bin
You may need sudo at the start again.
Then you should be able to take photos from anywhere just with:
csp-snap
Then your Python code becomes easy:
os.system('/usr/local/bin/csp-snap')

Linux equivalent of OS X's /usr/bin/open

I'm trying to launch a process such the same way that I do in OS X with /usr/bin/open like this: open -a /Applications/Firefox.app --args -profile "blah blah" -no-remote.
As I learned from this topic here: launchd from terminal to start app.
However, Linux doesn't have this open as I thought it did; I verified this, but couldn't find an alternative in my searching. How can I launch a process so that the launching process doesn't share its file descriptors with the launched process, as explained in this SO topic: Close all File Handles when Calling posix_spawn
Here is a screencast showing my desktop files; I'm trying to launch them so that the file descriptors don't mix with each other: https://www.youtube.com/watch?v=Yc19BzLTnDE
This video shows the PIDs are mixing: https://www.youtube.com/watch?v=YJsyV6tK7FA
Use xdg-open.
xdg-open is a desktop-independent tool that opens a file or URL in the user's preferred application.
You can launch X11 applications in Linux simply by running the binary, so the open command is unnecessary for this use. (Another use of open would be to launch documents with the associated application, for which you can use either a desktop-manager-specific command or xdg-open.)
To avoid sharing file descriptors you can simply close them from the shell, e.g., in bash /usr/bin/x11/firefox 3>&- 4>&- … (up to 9) or if it's just the standard ones then perhaps you can redirect them: </dev/null >/dev/null 2>/dev/null. Or maybe you just want to use nohup to avoid closing the program on SIGHUP when the terminal closes.
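If the launching program happens to be Python rather than a shell, the same effect (standard streams on /dev/null, other descriptors closed, detached from the terminal's session so SIGHUP doesn't reach it) can be sketched with subprocess; the firefox arguments below are the ones from the question:

```python
import subprocess

def launch_detached(argv):
    """Start argv with std streams redirected to /dev/null, other fds
    closed, and in a new session (setsid) so closing the terminal
    doesn't kill it."""
    return subprocess.Popen(
        argv,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        close_fds=True,          # don't inherit the parent's descriptors
        start_new_session=True,  # detach from the controlling terminal
    )

# e.g. launch_detached(["firefox", "-profile", "blah blah", "-no-remote"])
```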
Solution found that launches the .desktop file with the custom icon it uses. I couldn't get xdg-open to work on it, no clue why.
https://askubuntu.com/questions/591736/any-c-functions-to-simulate-double-click-on-file/592439

How to log xterm window from tcl script

I am opening an xterm window from my Tcl script with exec xterm -geometry 78x36+0+0 -fn "-adobe-courier-medium-r-*-*-*-120-*-*-*-*-iso8859-1" -sl 10000 -sb -bg white -bd white -into..... I am executing other commands in this emulated terminal. Now I want to log the output of those commands to a file from the same Tcl script.
Does anyone have an idea how to do this?
Thanks in advance
murali krishna
Capturing from outside — from the perspective of the script doing exec xterm … -into … — is extremely hard, as there are no events you get when something draws on the subsidiary window (except in one case where you actually don't want them) and you'd end up just seeing a lot of bitmaps of what happened anyway; large and really uninformative. You need to use a different approach; you need to capture from the inside, to log the things that the user sees on the terminal. Fortunately, this isn't actually too hard to do.
To keep a complete log of what happens inside a terminal (where the terminal program itself doesn't offer the feature) your best bet is to run a little Expect script inside the terminal.
package require Expect
log_file /tmp/somefile.log
spawn $env(SHELL)
interact
exit
Run this inside the terminal (xterm's -e option will start it for you) and it will record everything that happens inside. It's logged to a temporary file, /tmp/somefile.log, but you can change what name to use if you desire. It's probably a good idea to pass the log file in as an argument:
package require Expect
if {$argc < 1} {
error "not enough arguments"
}
# Unlike C, Tcl doesn't include interpreter name or script name in argv
log_file [lindex $argv 0]
spawn $env(SHELL)
interact
exit

execute a gui application from the command line and send it to the background

Is there a command line utility that I can use for executing X based applications that will detach my applications from the terminal so they aren't closed if the terminal is closed?
I guess such a app could be called something like gnome-run if it existed.
I have tried dtach, but it seems that you have to provide a socket file, which is a bit clunky to type. I have also tried nohup, but found that also a bit clunky to type by the time stdout and stderr are redirected to /dev/null.
I guess I'm looking for a simple program that will do something similar to this:
#!/bin/bash
nohup $1 > /dev/null 2>&1 &
Yes, there is a way to do it: first you need to run your GUI app and send it to background, then you (probably) want to detach it from Bash task management. For example if I wanted to run gedit this way:
gedit &
disown %1
After that you can close your terminal window and gedit will not be killed. Enjoy!
You already wrote your program: it is called a shell script. Give it whatever name you like and put it somewhere. Then either add that directory to your $PATH, or in your bashrc set:
alias gnome-run=<path>/my-awesome-script.sh
Why waste earth's resources on a program?
If you want to run an application (say, gedit) as if it was run from the GUI, use:
xdg-open /usr/share/applications/gedit.desktop
See this answer on superuser.

Webapp update shell script

I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2,3,4.
I can do this, but it will be a very plain and simple bash script simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script would be:
put set -e at the top of the script (after the shebang). This will cause your script to stop if any of the commands fail. So if it cannot cd to the directory, it will not run svn update or restart apache. You can do this manually by putting || exit 1 after each command, but if that's all you're doing, you may as well use set -e
Use full paths in your script. Do not assume the directory that the script is run from. In this specific case, the cd command has a relative path. Use a full (absolute) path, or use an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command line arguments to parameterise the script. But don't bother doing this up front. Just evolve your scripts as you need.
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
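If you ever move the routine into Python itself, subprocess.run(..., check=True) plays the role of set -e, aborting on the first failing step. The commands and path below are the question's (and passwordless sudo is assumed, as noted above); this is a sketch, not a prescription:

```python
import subprocess

def run_steps(steps, cwd=None):
    """Run each command in order; check=True raises CalledProcessError
    on the first failure, like `set -e` in a shell script."""
    for argv in steps:
        subprocess.run(argv, cwd=cwd, check=True)

# Mirroring the routine from the question:
# run_steps([["svn", "update"],
#            ["sudo", "/etc/init.d/apache2", "restart"]],
#           cwd="/home/you/django_src/django_apps/team_proj")
```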
I'd setup a cron job doing that automatically.
Since you're using python, check out fabric - you can use it to automate these kind of tasks. First install fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *
def svnupdate():
    with cd('django_src/django_apps/team_proj'):
        run('svn update')
    sudo('/etc/init.d/apache2 restart')
Save as fabfile.py, then run using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, zsh + ssh + tools), or with programming languages such as Python, Perl, Ruby, PHP or Java - basically any language that supports the SSH protocol and operating-system functions. The "best" one is the one you are most comfortable with and knowledgeable in. If you are doing sysadmin work, the shell is the closest thing you can use. Then, after you have written your script, you can use crontab (cron) or the at command to schedule your task. Check their man pages for more information.
You can easily do the above using bash/Bourne etc.
However I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why ?
the language constructs are much more powerful
there are no end of libraries to interface to the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I chose Perl particularly because it's been designed (perhaps designed is too strong a word for Perl) for these sorts of tasks. However you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps look at camh's answer. If you plan to run the script via cron, then implement some simple logging, e.g. by appending start time of each command with exit code to a textfile which you can later analyze for failures of the script.
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory
[[ -z $1 || -z $2 ]] && { echo "usage: $(basename $0) user@host project_dir"; exit 1; }
declare host=$1 proj_dir=$2
ssh $host "cd $proj_dir;svn update;sudo /etc/init.d/apache2 restart" && echo "Success"
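The same one-liner can also be issued from Python; a sketch that just builds the ssh argv (host and project directory are placeholders exactly as in the script above, and && is used so a failed update skips the restart):

```python
def build_remote_cmd(host, proj_dir):
    """Build the ssh argv for the update routine (mirrors the bash one-liner)."""
    remote = f"cd {proj_dir} && svn update && sudo /etc/init.d/apache2 restart"
    return ["ssh", host, remote]

# subprocess.run(build_remote_cmd("user@host",
#                                 "django_src/django_apps/team_proj"))
```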
Just to add another tip - you should not give users access to some application in an unknown state. svn up might break during the update, users might see a page that's half-new half-old, etc. If you're deploying the whole application at once, I'd suggest doing svn export instead to a new directory and then either mv current old ; mv new current, or even keeping current as a link to the directory you're using now. Still not perfect and not blocking every possible race condition, but it definitely takes less time than svn up on the live copy.
