bash script to show compatible commands based on "Windows-speak" - linux

Problem: Customer X is a Windows user who wants to be able to trigger pre-packaged bash commands by using mnemonic keywords or "tag hints" when she is logged in to her RedHat box via shell.
Example: Customer X logs into host using ssh and wants to do some routine file operations. She wants to be able to type
copy file
and get back a listing of pre-fab fill-in-the-blank bash commands to choose from
cp <#source#> <#dest#> ### simple copy
cp -R <#startdir#> <#destdir#> ### recursive copy
she then wants to be able to select one of these items, fill in the blank(s) and just hit enter to run the command.
Customer X is willing to specify ahead of time what commands she is likely to want to use (in windows-speak) and then hire the developer to translate those into bash commands, and then put them together in a script that allows her to talk windows-speak to bash and get back the list of commands.
NOTE: Customer X doesn't like apropos because it assumes familiarity with terms used in bash, as opposed to windows-speak. For example:
apropos shortcut
doesn't give her anything about creating symlinks (even though that is exactly what she wants) because she doesn't know what windows shortcuts are called in linux. Obviously, windows concepts don't carry over 100% so she will have to learn eventually, but she's a busy person and is asking for this as a way to "ease" her into linux understanding.
Question: What is the best way to get started on something like this? Is there a perl, python, ruby script out there that does something like this already? Is there something in bash that can simulate this kind of feature request?

What you probably want is to override bash's command-not-found handler. Here's the section in /etc/bash.bashrc in a standard Ubuntu install that installs the handler:
...
# if the command-not-found package is installed, use it
if [ -x /usr/lib/command-not-found ]; then
    function command_not_found_handle {
        # check because c-n-f could've been removed in the meantime
        if [ -x /usr/lib/command-not-found ]; then
            /usr/bin/python /usr/lib/command-not-found -- $1
            return $?
        else
            return 127
        fi
    }
fi
...
In effect, if a command is not found, a user-specified program is executed with that command as a parameter. In the case of Ubuntu, it's a Python program that checks whether the command the user typed is a valid application that can be installed, and if so, informs the user that he/she can install it.
What you probably want to do is compare the typed command to your mapping of windows-speak keywords to commands and usage strings, and display the appropriate entries if there's a match.
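A minimal sketch of such a handler, using the copy/shortcut examples from the question (the keyword table is illustrative, not a complete catalogue, and the select-and-fill-in step would still need to be built on top, e.g. with select and read -e):
# Minimal sketch of a windows-speak handler; extend the case
# statement with whatever keywords the customer specifies.
command_not_found_handle() {
    case "$1" in
        copy)
            echo 'cp <#source#> <#dest#>            ### simple copy'
            echo 'cp -R <#startdir#> <#destdir#>    ### recursive copy'
            ;;
        shortcut)
            echo 'ln -s <#target#> <#linkname#>     ### create a symlink (windows shortcut)'
            ;;
        *)
            echo "bash: $1: command not found" >&2
            return 127
            ;;
    esac
}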

How to execute a shell program taking inputs with python?

First of all, I'm using Ubuntu 20.04 and Python 3.8.
I would like to run a program that takes command line inputs. I managed to start the program from python with the os.system() command, but after starting the program it is impossible to send the inputs. The program in question is a product interface application that uses the CubeSat Space Protocol (CSP) as a language. However, the inputs used are encoded in a .c file with their corresponding .h header.
In the shell, it looks like this:
[screenshot: starting the program]
In python, it looks like this:
import os
os.chdir('/home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1')
os.system('./waf')
os.system('./build/csp-client -k/dev/ttyUSB1')
os.system('cmp ident')  # cmp ident is typically the kind of command that does not work from python
The output is the same as in the shell but without the "cmp ident" output, that is to say it's impossible for me to use the csp-client.
As you can probably see, I'm a real beginner trying to be as clear and precise as possible. I can of course try to give more information if needed. Thanks for your help !
It sounds like the pexpect module might be what you're looking for rather than os.system. It's designed for controlling other applications and interacting with them like a human would. The documentation for it is available here. What you want will probably look something like this:
import pexpect

# spawn the client and wait for its prompt before sending a command
p = pexpect.spawnu("/home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1/build/csp-client -k/dev/ttyUSB1")
p.expect("csp-client")
p.sendline("cmp ident")
print(p.read())
p.close()
I'll try and give you some hints to get you started - though bear in mind I do not know any of your tools, i.e. waf or csp-client, but hopefully that will not matter.
I'll number my points so you can refer to the steps easily.
Point 1
If waf is a build system, I wouldn't keep running that every time you want to run your csp-client. Just use waf to rebuild when you have changed your code - that should save time.
Point 2
When you change directory to /home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1 and then run ./build/csp-client you are effectively running:
/home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1/build/csp-client -k/dev/ttyUSB1
But that is rather annoying, so I would make a symbolic link to it in /usr/local/bin so that you can run it just with:
csp-client -k/dev/ttyUSB1
So, I would make that symlink with:
ln -s /home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1/build/csp-client /usr/local/bin/csp-client
You MAY need to put sudo at the start of that command. Once you have that, you should be able to just run:
csp-client -k/dev/ttyUSB1
Point 3
Your Python code doesn't work because every os.system() starts a completely new shell, unrelated to the previous line or shell. And the shell that it starts then exits before your next os.system() command.
As a result, the cmp ident command never goes to the csp-client. You really need to send the cmp ident command on the stdin or "standard input" of csp-client. You can do that in Python, it is described here, but it's not all that easy for a beginner.
Instead of that, if you just have a few limited commands you need to send, such as "take a picture", I would make and test complete bash scripts in the Terminal till I got them right, and then just call those from Python. So, I would make a bash script in your HOME directory called, say, csp-snap and put something like this in it:
#!/bin/bash
# Extend PATH so we can find "/usr/local/bin/csp-client"
PATH=$PATH:/usr/local/bin
{
    # Tell client to take picture
    echo "nanoncam snap"
    # Exit csp-client
    echo exit
} | csp-client -k/dev/ttyUSB1
Now make that executable (only necessary once) with:
chmod +x $HOME/csp-snap
And then you can test it with:
$HOME/csp-snap
If that works, you can copy the script to /usr/local/bin with:
cp $HOME/csp-snap /usr/local/bin
You may need sudo at the start again.
Then you should be able to take photos from anywhere just with:
csp-snap
Then your Python code becomes easy:
os.system('/usr/local/bin/csp-snap')

I want to get a tip on an rm command filter by using a bash script

Some weeks ago, a senior team member unexpectedly removed an important Oracle database file (.dbf). Fortunately, we could restore the system using backup files that had been saved some days earlier.
After seeing that situation, I decided to implement a solution that requires at least a double confirmation when typing the rm command at the prompt (more checking than rm -i does).
Even though we alias rm to rm -i by default, super-speedy typists still make mistakes like that member did, myself included.
At first, I replaced (by using an alias) the basic rm command with a specific bash script that prints and confirms many times if the targets are related to the Oracle database paths or files.
Simply speaking, the script operates as a filter before running rm. If the target is not related to Oracle, rm operates as normal.
While implementing it, I found that most features worked as I expected at the interactive user prompt, except for one concern:
what if rm is called from within other scripts (provided by Oracle, by other vendors modifying the Oracle path, by installers, etc.) or from programs (via a system call)?
How can I distinguish that situation?
If such a script meets the modified rm, its execution doesn't go ahead anymore.
Do you have more sophisticated methods?
I believe most readers can understand my rough explanation.
If the scenario above isn't clear, let me know and I will elaborate.
We read at man bash:
Aliases are not expanded when the shell is not interactive, unless the
expand_aliases shell option is set using shopt.
Then if you use alias to make rm invoke your shell script, other scripts won't use it by default. If that's what you want, then you're already safe.
The problem is if you want your version of rm to be invoked by scripts and do something smart when it happens. Alias is not enough for the former; even putting your rm somewhere under $PATH is not enough for programs explicitly calling /bin/rm. And for programs that aren't shell scripts, unlink system call is much more likely to be used than something like system("rm ...").
I think that for the whole "safe rm" thing to be useful, it should avoid prompts even when invoked interactively. Every user will develop the habit of saying "yes" to it, and there is no known way around that. What might work is something that moves files to a recycle bin instead of deleting them, making damage easy to undo (as I seem to recall, there were ready-to-use solutions for this).
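A minimal sketch of that recycle-bin idea (the trash path and naming scheme are illustrative only; tools like trash-cli package this up properly):
# Sketch: move targets into a per-user trash directory instead of
# deleting them, so a mistake is undone with a simple mv back out.
TRASH="$HOME/.mytrash"

rm() {
    mkdir -p "$TRASH"
    local f
    for f in "$@"; do
        [ -e "$f" ] || continue   # skip options/missing files in this sketch
        # timestamp suffix avoids collisions between same-named files
        mv -- "$f" "$TRASH/$(basename "$f").$(date +%s)"
    done
}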
The answer is in the alias manpage:
Note aliases are not expanded by default in non-interactive
shell, and it can be enabled by setting the expand_aliases shell
option using shopt.
Check it yourself with man alias ;)
Anyway, I would do it the same way you've chosen.
To distinguish the situation: you can create an environment variable, say APPL, set with export APPL="DATABASE". In your customized rm script, perform the double-checking only if APPL is DATABASE (which indicates a database-related script), and not otherwise, which means the rm call came from some other script.
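A sketch of that check inside the customized rm script (the APPL variable and /bin/rm fallback are as suggested above; the prompt wording is my own):
# Only interrogate the user when the caller marked itself as
# database-related via APPL; otherwise behave like plain rm.
if [ "$APPL" = "DATABASE" ]; then
    echo "This rm call looks database-related: $*"
    read -p "Really delete? Type 'yes' to continue: " answer
    [ "$answer" = "yes" ] || exit 1
fi
exec /bin/rm "$@"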
If you're using bash, you can export your shell function, which will make it available in scripts, too.
#!/usr/bin/env bash
# Define a replacement for `rm` and export it.
rm() { echo "PSYCH."; }; export -f rm
Shell functions take precedence over builtins and external utilities, so by using just rm even scripts will invoke the function - unless they explicitly bypass the function by invoking /bin/rm ... or command rm ....
Place the above (with your actual implementation of rm()) either in each user's ~/.bashrc file, or in the system-wide bash profile - sadly, its location is not standardized (e.g.: Ubuntu: /etc/bash.bashrc; Fedora /etc/bashrc)
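A quick check that the exported function really reaches child shells (the PSYCH. output is from the placeholder implementation above):
$ rm somefile
PSYCH.
$ bash -c 'rm somefile'    # a child bash script sees the exported function too
PSYCH.
$ /bin/rm somefile         # an explicit path still bypasses it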

Customize tab completion in shell

This may be have a better name than "custom tab completion", but here's the scenario:
Typically when I'm at the command line and I enter a command, followed with {TAB} twice, I get a list of all files and subdirectories in the current directory. For example:
[user@host tmp]$ cat <TAB><TAB>
chromatron2.exe Fedora-16-i686-Live-Desktop.iso isolate.py
favicon.ico foo.exe James_Gosling_Interview.mp3
However, I noticed at least one program somehow filters this list: wine. Consider:
[user@host tmp]$ wine <TAB><TAB>
chromatron2.exe foo.exe
It effectively filters the results to *.exe.
Thinking it might be some sort of wrapper script responsible for the filtering, I did a which and a file on it, and it turns out wine is not a script but an executable.
Now, I don't know whether this "filter" is somehow encoded in the program itself, or otherwise specified during the default wine install, so I'm not sure whether this question is more appropriate for stackoverflow or superuser, so I'm crossing my fingers and throwing it here. I apologize if I guessed wrong. (Also, I checked a few similar questions, but most were irrelevant or involved editing the shell configuration.)
So my question is, how is this "filtering" accomplished? Thanks in advance.
You will likely find a file on your system called /etc/bash_completion which is full of functions and complete commands that set up this behavior. The file will be sourced by one of your shell startup files such as ~/.bashrc.
There may also be a directory called /etc/bash_completion.d which contains individual files with more completion functions. These files are sourced by /etc/bash_completion.
This is what the wine completion command looks like from the /etc/bash_completion on my system:
complete -f -X '!*.@(exe|EXE|com|COM|scr|SCR|exe.so)' wine
This set of files is in large part maintained by the Bash Completion Project.
You can take a look at Programmable Completion in the bash manual to understand how it works.
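Registering the same style of filter for another program is a one-liner; for instance (the evince binding here is my own example, not from the completion package):
# Complete evince's arguments with PDF files only: -f restricts
# matches to filenames, -X removes everything NOT matching the
# extglob-style pattern, exactly as the wine rule above does.
complete -f -X '!*.@(pdf|PDF)' evince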
I know this is old but I was looking to do something similar with a script of my own.
You can play around with an example I made here:
http://runnable.com/Uug-FAUPXc4hAADF/autocomplete-for-bash
Pasted code from above:
# Create function that will run when a certain phrase is typed in terminal
# and tab key is pressed twice
_math_complete()
{
    # fill local variable with a list of completions
    local COMPLETES="add sub mult div"
    # you can fill this variable however you want. example:
    # ./genMathArgs.sh > ./mathArgsList
    # local COMPLETES=`cat ./mathArgsList`

    # we put the completions into $COMPREPLY using compgen
    COMPREPLY=( $(compgen -W "$COMPLETES" -- ${COMP_WORDS[COMP_CWORD]}) )
    return 0
}

# get completions for command 'math' from function '_math_complete()'
complete -F _math_complete math

# print instructions
echo ""
echo "To test auto complete do the following:"
echo "Type math then press tab twice."
echo "You will see the list we created in COMPLETES"
echo ""

Automatically invoking gksudo like UAC

This is about me being stressed by playing the game "type a command and remember to prepend sudo or your fingers will get slapped".
I am wondering if it is possible somehow to configure my Linux system or shell such that when I forget to type e.g. "sudo apt-get install emacs", instead of just telling me that I did something wrong, gksudo would get launched, allowing me to acknowledge my credentials and get on moving. Just like UAC does on windows.
Googling hasn't helped me yet..
So is this possible? Did I miss something? Or am I asking for a square circle?
Edit 2010 July 25th: Thanks everyone for your interest. Unfortunately, Daenyth's and bmargulies's answers and explanations are what I anticipated/feared, since it was impossible for me to google up a solution prior to submitting this question. I hope that some nice person will someday provide an effective solution for this.
BR,
Christian
Linux doesn't allow for this. Unlike Windows, where any program can launch a dialog box, and UAC is in the kernel, Linux programs aren't necessarily GUI-capable, and sudo is not, in this sense, in the kernel. A program cannot make a call to elevate privilege (unless it was launched with privilege to begin with and intentionally setuid'd down). sudo is a separate executable with setuid privilege, which checks for permission. If it likes what it sees, it forks the shell to execute the command line. This can't be turned inside out.
As suggested in other posts, you may be able to come up with some 'shell game' to arrange to run sudo for you for some enumerated list of commands, but that's all you are going to get.
You can do what you want with a preexec hook function, similar to the command-not-found package.
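Bash has no built-in preexec, but something similar can be approximated with a DEBUG trap. A rough sketch (the command whitelist and elevation behavior are my own illustration, not an existing package; it relies on the extdebug rule that a non-zero return from the DEBUG trap skips the command about to run, and the naive re-execution via eval only suits simple command lines):
# Auto-elevate a whitelist of commands via a DEBUG trap.
shopt -s extdebug

preexec_sudo() {
    case "$BASH_COMMAND" in
        apt-get\ *|dpkg\ *)                 # illustrative whitelist
            if [ "$EUID" -ne 0 ]; then
                eval "sudo $BASH_COMMAND"   # re-run elevated
                return 1                    # non-zero: skip the original command
            fi
            ;;
    esac
    return 0
}
trap preexec_sudo DEBUG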
There's no way to do this given the current linux software stack. Additionally, MS has a patent on this behavior -- present a user interface identifying an account having a right to permit a task in response to the task being prohibited based on a user's current account not having that right.
I don't think this really works in a general way (automatically deciding which application needs admin rights). However you could make aliases like this for every application:
alias apt-get='gksudo apt-get'
If you now enter apt-get install firefox, GNOME asks for the admin password. You can store the aliases in ~/.bashrc
You could use a shell script like the following:
#!/bin/bash
"$@"
if [ $? -ne 0 ]; then
    sudo "$@"   # or: gksudo "$@"
fi
This will run a command given in the arguments with a sudo prefix if the command came back with a non-zero return code (i.e. if it failed).
Use it as in "SCRIPT_NAME apt-get install emacs" for example. You may save it somewhere in your $PATH and set it as an alias like this (if you saved it as do_sudo):
alias apt-get='do_sudo apt-get'
Edit: That does not work for programs like synaptic, which do run for non-root users but give them fewer privileges. However, if the application fails when invoked without root privileges (like apt-get does), this works fine.
In the case where you want to always run a command as root but might already be root, you can solve this by wrapping a little bash script around it:
#!/bin/bash
if [ $EUID = 0 ]; then
    "$@"
else
    gksudo "$@"
fi
If you call this something like alwaysroot.bash and place it in the right spot on your PATH, then you can call your other program like this:
alwaysroot.bash otherprogram -arguments...
It even handles arguments with spaces in them correctly.

Webapp update shell script

I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2,3,4.
I can do this, but it will be a very plain and simple bash script simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script would be:
put set -e at the top of the script (after the shebang). This will cause your script to stop if any of the commands fail. So if it cannot cd to the directory, it will not run svn update or restart apache. You can do this programmatically by putting || exit 1 after each command, but if that's all you're doing, you may as well use set -e
Use full paths in your script. Do not assume the directory that the script is run from. In this specific case, the cd command has a relative path. Use a full (absolute) path, or use an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command line arguments to parameterise the script. But don't bother doing this up front. Just evolve your scripts as you need.
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
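Put together, the script could look like this (assuming the project lives under your home directory, as the relative path in the question suggests):
#!/bin/bash
set -e   # stop at the first failing command

# absolute path built from $HOME rather than assuming the current directory
cd "$HOME/django_src/django_apps/team_proj"
svn update
sudo /etc/init.d/apache2 restart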
I'd set up a cron job to do that automatically.
Since you're using python, check out fabric - you can use it to automate these kind of tasks. First install fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *

def svnupdate():
    with cd('django_src/django_apps/team_proj'):
        run('svn update')
        sudo('/etc/init.d/apache2 restart')
Save as fabfile.py, then run using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, zsh + ssh + tools), or with programming languages such as Python, Perl, Ruby, PHP, Java, etc. - basically any language that supports the SSH protocol and operating system functions. The "best" one is the one you are most comfortable with and knowledgeable in. If you are doing sysadmin work, the shell is the closest thing you can use. Then, after you have written your script, you can use crontab (cron) or the at command to schedule your task. Check their man pages for more information.
You can easily do the above using bash/Bourne etc.
However I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why ?
the language constructs are much more powerful
there are no end of libraries to interface to the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I chose Perl particularly because it's been designed (perhaps "designed" is too strong a word for Perl) for these sorts of tasks. However you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps, look at camh's answer. If you plan to run the script via cron, then implement some simple logging, e.g. by appending the start time of each command along with its exit code to a text file which you can later analyze for failures of the script.
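A minimal sketch of that logging (the log path and helper name are made up for illustration):
#!/bin/bash
LOG="$HOME/webapp-update.log"   # illustrative log location

# Run a command, then append its start time and exit code to the log.
log_step() {
    local start status
    start=$(date '+%F %T')
    "$@"
    status=$?
    echo "$start exit=$status cmd: $*" >> "$LOG"
    return $status
}

log_step svn update "$HOME/django_src/django_apps/team_proj"
log_step sudo /etc/init.d/apache2 restart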
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory

[[ -z $1 || -z $2 ]] && { echo "usage: $(basename $0) user@host project_dir"; exit 1; }

declare host=$1 proj_dir=$2

ssh $host "cd $proj_dir; svn update; sudo /etc/init.d/apache2 restart" && echo "Success"
Just to add another tip - you should not give users access to some application in an unknown state. svn up might break during the update, users might see a page that's half-new half-old, etc. If you're deploying the whole application at once, I'd suggest doing svn export instead to a new directory and then either mv current old ; mv new current, or even keeping current as a link to the directory you're using now. Still not perfect and not blocking every possible race condition, but it definitely takes less time than svn up on the live copy.
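A sketch of that symlink variant (the repository URL and release paths are hypothetical):
#!/bin/bash
set -e
# Export a clean copy into a timestamped release directory, then flip
# the "current" symlink; -n replaces the link itself rather than
# creating a link inside the directory it points to.
ts=$(date +%Y%m%d%H%M%S)
svn export http://svn.example.com/repo/trunk "/srv/webapp/releases/$ts"
ln -sfn "/srv/webapp/releases/$ts" /srv/webapp/current
sudo /etc/init.d/apache2 restart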
