Piping Text into a bash script - linux

When my user logs in, I have to enter the following manually, so I am trying to create a script that does it for me:
. oraenv
The app asks me for input, so I enter "M40" (the same text every time).
Then I have to run a Linux app to launch my work environment.
So how do I automatically enter M40 followed by the Enter key?

The oraenv script is prompting for a value for ORACLE_SID, so you can set that yourself in a .profile or elsewhere.
export ORACLE_SID=M40
It also has a flag you can set to make it non-interactive:
ORAENV_ASK=NO
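Putting the two together, a minimal sketch for your login environment (assuming the standard oraenv and that your shell reads ~/.profile at login) could be:

# in ~/.profile or ~/.bash_profile
export ORACLE_SID=M40    # the value oraenv would otherwise prompt for
export ORAENV_ASK=NO     # suppress the prompt entirely
. oraenv                 # runs non-interactively now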
Regarding piped input specifically, the script would have to be written to handle it, for example using read or commands such as cat without a filename. See Pipe input into a script for more details. However, this is not how the standard oraenv is coded (assuming that is the script you are using).

I am not sure if any of these operations helps you.
echo M40 | . oraenv
This one pipes the text in with echo.
printf M40 | . oraenv
This one pipes with printf instead. echo and printf behave differently in some situations: most notably, echo appends a trailing newline while printf does not unless you write \n explicitly, and echo's handling of options and backslash escapes varies between shells.
. oraenv <<< M40
This one uses a here string (sorry for using the ABS as a reference), a stripped-down form of the heredoc.
. oraenv < <(echo M40)
This one uses process substitution; see https://superuser.com/questions/1059781/what-exactly-is-in-bash-and-in-zsh for the difference between this one and the one above.
expect -c 'spawn sh -c ". oraenv"; expect "nput"; send "M40\r"; interact'
This one uses expect to supply the input automatically; it is more flexible in many situations. Change the expect "nput" part to match your actual prompt (matching "nput" catches both "Input" and "input"). Note that spawn cannot run the shell builtin . directly, which is why oraenv is sourced in a child shell here; as with the piped variants, anything oraenv exports in that child process will not reach your current shell.

Proxying commands across a SSH connection

I'd like to be able to "spoof" certain commands on my machine, actually invoking them on a remote system. For example, whenever I run:
cmd options
I'd like the actual command to be:
ssh user@host cmd options
Ideally I'd like to have a folder called spoof, add it to my PATH, and have an executable in there called cmd which does the spoofing. If I have a lot of commands, this could get tedious. Anyone have ideas of a good way to go about this? Such that I can add and remove a lot of commands in the future? And, I'd like to be able to pass all the arguments exactly (or as exact as possible) and every single command I want to spoof would just have the ssh user#host in front of it.
The reason for this is I'm running a container (specifically singularity) on my machine, and there are certain commands I don't really want to containerize, but still want to run from within the container. I've found I can get the functionality I want by just appending the ssh in front of it. Examples are sbatch and matlab which are a pain to containerize and I'm fine with just using ssh to call them. Files that these programs use are written to a bind point so the host machine can see them just fine.
The following script can be hardlinked under all the names of commands you wish to transparently proxy:
#!/usr/bin/env bash
printf -v str '%q ' "${0##*/}" "$@"
ssh host "$str"
SSH combines all its arguments into a single string, which is then executed by a remote shell. To ensure that the remote arguments are identical to the local ones, the values need to be escaped; otherwise, somecommand "hello world" and somecommand "hello" "world" can be represented identically over the wire.
In an appropriately extended printf (including both the bash and ksh implementations), %q is replaced with an escaped form of the corresponding value, which will be evaluated back to the original (literal) text if interpreted by a shell later.
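A quick illustration of %q at a bash prompt:

$ printf '%q\n' 'hello world' '$HOME'
hello\ world
\$HOME

Both escaped forms parse back to the original literal strings when a shell later interprets them.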
printf -v varname stores the output of printf in a variable named varname without the overhead/inefficiency of a command substitution. (In ksh93, varname=$(printf ...) is optimized to skip the subshell overhead, so this is not necessary there.)
$0 evaluates to argv[0], which is by convention the name of the command currently being run. (This can be overridden, but you trust your users to behave reasonably... right?)
${0##*/} is a parameter expansion which returns only content after the last / in $0 (should it in fact contain any slashes; otherwise, the original value is used unmodified).
"$#" refers to the exact argument vector passed to your script.

Simple commands not found bash (#!/usr/bin/expect)

I've recently started using bash to automate a Windows rescue disk with chntpw. I'm trying to set up the program to use the expect command to listen for certain chntpw dialog questions and input the right answers without any user input. For some reason, after switching the script from #!/bin/bash to #!/usr/bin/expect, many standard terminal commands are no longer understood.
I'm running the script by typing this into terminal:
user@kali:~/Desktop/projects/breezee$ bash breezee1.sh
The terminal output is as follows:
BREEZEE 1.0
Welcome to BREEZEE
breezee1.sh: line 9: fdisk: command not found
[Select] /dev/:
Here is my code:
#!/usr/bin/expect
clear
echo "BREEZEE 1.0"
echo "Welcome to BREEZEE"
fdisk -l
#list partitions
echo -n "[Select] /dev/:"
#ask user to choose primary windows partition
read sda
clear
echo /dev/$sda selected
umount /dev/$sda
sudo ntfsfix /dev/$sda
sudo mount -t ntfs-3g -o remove_hiberfile /dev/$sda /mnt/
cd /mnt/Windows/System32/config
clear
chntpw -l SAM #list accounts on windows partition
chntpw -u Administrator SAM
#now supply chntpw with values to run the password clear (this answers the prompts)
expect '> '
send '1\r'
expect '> '
send '2\r'
expect '> '
send '3\r'
expect ': '
send 'y\r'
expect '> '
send 'q\r'
expect ': '
send 'y\r'
clear
echo "Operation Successfull!"
chntpw -l SAM #list accounts on windows partition
In short, I'm trying to use standard bash/terminal commands alongside the expect commands. I'm probably going about this all wrong, so please correct me as I've been troubleshooting this for about three days and haven't gotten far :(
When you specify the application that should run your script, you can only use the scripting language that application understands.
Clearly, Expect is not bash, and does not understand bash commands.
I suggest you separate those two scripts: write the first part for #!/bin/bash and the second for Expect, then make the first script invoke the second to drive chntpw (a sketch follows below).
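A rough sketch of that split; the file names and the chntpw prompt strings are assumptions you would need to adapt, and here the Expect half spawns chntpw itself rather than having output redirected into it:

#!/bin/bash
# breezee1.sh - the bash half: prepare the partition, then hand off
fdisk -l
read -p "[Select] /dev/: " sda
sudo ntfsfix "/dev/$sda"
sudo mount -t ntfs-3g -o remove_hiberfile "/dev/$sda" /mnt/
cd /mnt/Windows/System32/config || exit 1
expect ./breezee2.exp

#!/usr/bin/expect -f
# breezee2.exp - the Expect half: answer the chntpw prompts automatically
spawn chntpw -u Administrator SAM
expect "> " ; send "1\r"
expect "> " ; send "2\r"
expect "> " ; send "3\r"
expect ": " ; send "y\r"
expect "> " ; send "q\r"
expect ": " ; send "y\r"
expect eof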
expect uses Tcl, not bash. So you have to write your script in Tcl when you use #!/usr/bin/expect.
For example, echo "BREEZEE 1.0" should be written as:
puts "BREEZEE 1.0"
And you should use exp_send instead of send.
From expect manual:
exp_send is an alias for send. If you are using Expectk or some other variant of Expect in the Tk environment, send is defined by Tk for an entirely different purpose. exp_send is provided for compatibility between environments. Similar aliases are provided for Expect's other send commands.
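For instance, a minimal fragment using puts and exp_send (the prompt string is a guess):

#!/usr/bin/expect
puts "BREEZEE 1.0"
spawn chntpw -u Administrator SAM
expect "> "
exp_send "1\r"
interact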

predefined input in a nohup shell script [duplicate]

I have a script that calls an application that requires user input, e.g. an app that requires the user to type in 'Y' or 'N'.
How can I get the shell script not to ask the user for the input but rather use the value from a predefined variable in the script?
In my case there will be two questions that require input.
You can pipe in whatever text you'd like on stdin and it will be just the same as having the user type it themselves. For example, to simulate typing "Y", just use:
echo "Y" | myapp
or using a shell variable:
echo "$ANSWER" | myapp
There is also a Unix command called yes that outputs a continuous stream of "y" for apps that ask lots of questions you just want to answer in the affirmative.
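For example (myapp stands in for your program):

yes | myapp      # answers "y" to every prompt
yes n | myapp    # yes repeats whatever string you give it, here "n"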
If the app reads from stdin (as opposed to from /dev/tty, as e.g. the passwd program does), then multiline input is the perfect candidate for a here-document.
#!/bin/sh
the_app [app options here] <<EOF
Yes
No
Maybe
Do it with $SHELL
Quit
EOF
As you can see, here-documents even allow parameter substitution. If you don't want this, use <<'EOF'.
For more complicated situations there is the expect command; your system should have it. I haven't used it much myself, but I suspect it's what you're looking for.
$ man expect
http://oreilly.com/catalog/expect/chapter/ch03.html
I prefer this way: if you want multiple inputs, you put in multiple echo statements, as so:
{ echo Y; echo Y; } | sh install.sh >> install.out
In the example above, I am feeding two inputs into the install.sh script. Then, at the end, I am piping the script's output to a log file to be archived and viewed later.

Bash: Loop through file and read substring as argument, execute multiple instances

How it is now
I currently have a script running under Windows that frequently retrieves recursive file listings from a list of servers.
I use an AutoIt (job manager) script to execute 30 parallel instances of lftp (still on Windows), doing this:
lftp -e "find .; exit" <serveraddr>
The file used as input for the job manager is a plain text file and each line is formatted like this:
<serveraddr>|...
where "..." is unimportant data. I need to run multiple instances of lftp in order to achieve maximum performance, because single instance performance is determined by the response time of the server.
Each lftp.exe instance pipes its output to a file named
<serveraddr>.txt
How it needs to be
Now I need to port this whole thing over to a dedicated Linux server (Ubuntu, with lftp installed). From my previous, very(!) limited experience with Linux, I guess this will be quite simple.
What do I need to write, and with what? For example, do I still need a job manager script, or can this be done in a single script? How do I read from the file (I guess that will be the easy part), and how do I keep a maximum of 30 instances running (maybe even with a timeout, because extremely unresponsive servers can clog the queue)?
Thanks!
Parallel processing
I'd use GNU parallel. It isn't installed by default, but it can be installed on most Linux distributions from the default package repositories. It works like this:
parallel echo ::: arg1 arg2
will execute echo arg1 and echo arg2 in parallel.
So the easiest approach is to create a script that synchronizes your server in bash/perl/python - whatever suits your fancy - and execute it like this:
parallel ./script ::: server1 server2
The script could look like this:
#!/bin/sh
#$0 holds program name, $1 holds first argument.
#$1 will get passed from GNU/parallel. we save it to a variable.
server="$1"
lftp -e "find .; exit" "$server" >"$server-files.txt"
lftp seems to be available for Linux as well, so you don't need to change the FTP client.
To run at most 30 instances at a time, pass -j30, like this: parallel -j30 echo ::: 1 2 3
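The timeout you ask about is built in as well; for example, to kill any transfer that runs longer than five minutes (300 is just an illustrative value):

parallel -j30 --timeout 300 ./script ::: server1 server2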
Reading the file list
Now how do you transform the specification file with its <server>|... entries into GNU parallel arguments? Easy - first, filter the file so it contains just host names:
sed 's/|.*$//' server-list.txt
sed is used to replace things using regular expressions, and more. This will strip everything (.*) after the first | up to the line end ($). (While | normally means the alternation operator in regular expressions, in sed's basic regular expressions it has to be escaped to act that way; unescaped, it matches just a plain |.)
So now you have a list of servers. How do you pass them to your script? With xargs! xargs will pass each line as if it were an additional argument to your executable. For example
echo -e "1\n2"|xargs echo fixed_argument
will run
echo fixed_argument 1 2
So in your case you should do
sed 's/|.*$//' server-list.txt | xargs parallel -j30 ./script :::
Caveats
Be sure not to save the results to the same file in each parallel task, otherwise the file will get corrupted - coreutils are simple and don't implement any locking mechanisms unless you add them yourself. That's why I redirected the output to $server-files.txt rather than files.txt.

Customize tab completion in shell

This may be have a better name than "custom tab completion", but here's the scenario:
Typically when I'm at the command line and I enter a command, followed with {TAB} twice, I get a list of all files and subdirectories in the current directory. For example:
[user@host tmp]$ cat <TAB><TAB>
chromatron2.exe Fedora-16-i686-Live-Desktop.iso isolate.py
favicon.ico foo.exe James_Gosling_Interview.mp3
However, I noticed at least one program somehow filters this list: wine. Consider:
[user@host tmp]$ wine <TAB><TAB>
chromatron2.exe foo.exe
It effectively filters the results to *.exe.
Thinking it might be some sort of wrapper script responsible for the filtering, I did a which and a file, and it turns out wine is not a script but an executable.
Now, I don't know whether this "filter" is somehow encoded in the program itself, or otherwise specified during the default wine install, so I'm not sure whether this question is more appropriate for stackoverflow or superuser, so I'm crossing my fingers and throwing it here. I apologize if I guessed wrong. (Also, I checked a few similar questions, but most were irrelevant or involved editing the shell configuration.)
So my question is, how is this "filtering" accomplished? Thanks in advance.
You will likely find a file on your system called /etc/bash_completion which is full of functions and complete commands that set up this behavior. The file will be sourced by one of your shell startup files such as ~/.bashrc.
There may also be a directory called /etc/bash_completion.d which contains individual files with more completion functions. These files are sourced by /etc/bash_completion.
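On many systems the hookup lives in ~/.bashrc and looks roughly like this (the exact path varies by distribution):

if [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
fi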
This is what the wine completion command looks like from the /etc/bash_completion on my system:
complete -f -X '!*.@(exe|EXE|com|COM|scr|SCR|exe.so)' wine
This set of files is in large part maintained by the Bash Completion Project.
You can take a look at the Programmable Completion section of the bash manual to understand how it works.
I know this is old but I was looking to do something similar with a script of my own.
You can play around with an example I made here:
http://runnable.com/Uug-FAUPXc4hAADF/autocomplete-for-bash
Pasted code from above:
# Create function that will run when a certain phrase is typed in terminal
# and tab key is pressed twice
_math_complete()
{
    # fill a local variable with the list of completions
    local COMPLETES="add sub mult div"
    # you can fill this variable however you want. example:
    # ./genMathArgs.sh > ./mathArgsList
    # local COMPLETES=`cat ./mathArgsList`
    # put the completions into $COMPREPLY using compgen
    COMPREPLY=( $(compgen -W "$COMPLETES" -- "${COMP_WORDS[COMP_CWORD]}") )
    return 0
}

# get completions for command 'math' from function '_math_complete()'
complete -F _math_complete math
# print instructions
echo ""
echo "To test auto complete do the following:"
echo "Type math then press tab twice."
echo "You will see the list we created in COMPLETES"
echo ""
