I want to execute a Linux command in Bash with the input coming from a file or echo, and then switch back to standard input as if the command's input was not redirected.
So basically, I want to feed the first part from an interactive command with a predefined text, and as soon as that text is "consumed" I want to continue using the keyboard (stdin).
Some examples:
Prefill editor and then continue typing manually
(echo blablabla; cat) | nano
Auto remove first file, then manually confirm removing second file
touch dummyfile1.txt; touch dummyfile2.txt
(echo y; cat) | rm -i dummyfile*.txt
Fill in the password for dummyuser, and then let the user fill in the password for the zip file
(echo dummypassword; cat) | su dummyuser -c "unzip pwdprotectedfile.zip"
Here, the echo fills in the first part of the command, and then cat takes over to copy stdin to stdout to manually fill in the remaining part of whatever the command needs.
The (echo ; cat) method is the closest thing that (almost) works. But the problem here is that at the end an extra enter key press is needed to return to the command prompt.
How to do this properly without the extra key press needed?
The real situation I need this for is to run su -c somecommand, fill in the password automatically (from a secure source), and then let the user answer the questions asked by somecommand.
If your intent is to programmatically generate a content for the nano editor, it is as simple as telling nano to edit the standard input by specifying - as file name.
echo "blablabla" | nano -
Note that echo's behaviour is not portable across shells, so prefer printf '%s\n' "blablabla" for a single line of text.
To be courteous to the user, you can invoke their preferred editor as set in the EDITOR environment variable, falling back to vi if EDITOR is not set.
printf '%s\n' "blablabla" | "${EDITOR:-vi}"
If you are using bash, you can replace the printf pipe with a here-string instead:
#!/usr/bin/env bash
"${EDITOR:-vi}" - <<<"blablabla"
Manual steps:
I run a command that lists data about my applications. There are over 1200 commands/jobs.
One of the lines contains the location where the logs can be found. I want to run "more" on that file.
Is this possible with Unix scripting, using one function or one function calling another function?
Yes!
There are quite a few ways to combine operations! There are pipes, which let you send the output of one command to another command. There are commands like grep (search), sed (find/replace) and awk (computation, and more) to help you process the output (and send it on to other programs with pipes). And there is command substitution ($(...)), which lets you run a command and pass its output to another command as an argument.
Concretely, let's say your program list-my-data produces output like the following (what comes after $ is what you type; the rest is the output):
$ list-my-data
line 1
line 2
line 3
log file: /path/to/a/file.log
line 5
....
line 100000
You can extract the line that contains the log file by piping (|) the output to grep and telling grep what to search for:
$ list-my-data | grep 'log file:'
log file: /path/to/a/file.log
From this, you can extract the path to the log file by piping the output to sed and asking sed to remove the extra stuff in the line:
$ list-my-data | grep 'log file:' | sed -e 's|log file: ||'
/path/to/a/file.log
You can now pass this path to more (or, better, less) by using command substitution to supply it as an argument:
$ less $(list-my-data | grep 'log file:' | sed -e 's|log file: ||')
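If you would rather wrap this up as a function, as the question asks, here is a minimal sketch, assuming list-my-data prints exactly one "log file:" line in the format shown above:
view_log() {
    # grab the path after "log file: " and open it in less
    logpath=$(list-my-data | grep 'log file:' | sed -e 's|log file: ||')
    less "$logpath"
}
After defining it (for example in your shell startup file), running view_log opens the log.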
I have a script that I have copied and edited. There are a couple of lines in it that I would like explained, please.
These are the lines:
read -p "please enter the username you wish to create: " username
if id -u $username >/dev/null 2>&1; then
What does read -p do? What does id -u do? And what does >/dev/null 2>&1 do?
Then further on in the script, it has this line that says this:
sudo useradd -g $group -s $bash -d $homedir -m $username -p $password
Again, could someone please explain all the minus signs in this line (-g, -s, -d, -m, -p)?
First off, the structure <command> -<option> means that you want to execute <command> using the option corresponding to <option>. A - after a command means that the following letter is an option. Most commands have several options you can use. Options are usually defined using either a single letter or a couple of words separated by -.
Side Note: For options that are a couple of words rather than a single letter, often it will use two minus signs -- instead of one, signifying that it is a "long named" option.
So, using the read -p example, this means you want to execute read using the p option, which stands for prompt.
Now, sometimes an option will require an argument. In your examples, the options to useradd have arguments. Arguments are usually defined like <command> -<option> [argument]. So, in the useradd example, $group is an argument for the option g.
Now for the commands themselves:
read is a shell built-in that reads a line from standard input (the -p option is a bash extension, not part of POSIX).
The -p option prints its argument as a prompt, without a trailing newline, before reading the input.
if checks the return status of the test command (in this case id -u $username >/dev/null 2>&1)
If the return status is 0, the then part is executed
id prints user and group IDs
The -u option "prints only the effective user ID".
>/dev/null 2>&1 redirects standard output and standard error to /dev/null, meaning they do not get printed to the terminal.
useradd creates a new user
-g sets the initial group for the user
-s sets the name of the user's login shell
-d sets the name of the user's login directory
-m says to create the user's home directory if it does not exist.
-p defines the user's encrypted password.
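Putting those pieces together, the relevant part of such a script might look like this (a sketch; the $group, $bash, $homedir and $password variables are assumed to be set earlier in the script):
read -p "please enter the username you wish to create: " username
if id -u "$username" >/dev/null 2>&1; then
    # id succeeded, so the user already exists
    echo "user $username already exists"
else
    # id failed, so create the user with the options explained above
    sudo useradd -g "$group" -s "$bash" -d "$homedir" -m "$username" -p "$password"
fi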
For future reference, you can look up commands in the Linux manual pages by running man <command> on the command line. These manual pages tell you what a command does, as well as explaining all of its options.
Bash built-ins like read are all on a single man page that is not the easiest thing to use. For those I find googling them easier. Usually http://ss64.com/ will come up in the results, which contains the info from the bash built-ins man page, but separated into different pages by command. I find this much easier to use.
I am trying to get into writing scripts that execute in the terminal. I wanted to know what the best way to do this was. I want to start by making a simple script that will run four or five commands that will update a certain program on my computer and have that run every day at a certain time. I have a programming background, but I am unfamiliar with this kind of scripting. I would appreciate any advice or input such as what language to use.
First of all, you need to open a terminal (such as Terminal, Terminator, etc.) and then run this:
touch myScript.sh
chmod 755 myScript.sh
The first command creates an empty file, and the second gives it 755 permissions, which means it will be readable and executable by any user on your machine (and writable only by you). If you need more details about those permissions, refer to the chmod documentation. But, believe me, those permissions will do for the moment.
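For example, 755 is shorthand for the symbolic permissions below; either command gives the same result:
chmod u=rwx,g=rx,o=rx myScript.sh   # same as chmod 755: owner rwx, group and others r-x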
Now you can put instructions into the file in several ways: you can open it with a text editor such as vi, or you can echo the commands into it like this:
echo "ls /tmp" >> myScript.sh
echo "echo 'hello'" >> myScript.sh
echo "pwd" >> myScript.sh
If you open that file, you will find that it is simply a list of commands, one per line. When you run the script, each command is executed in order from top to bottom.
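For instance, after the three echo commands above, the file contains:
$ cat myScript.sh
ls /tmp
echo 'hello'
pwd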
You can run the script using the following syntax:
./myScript.sh
Voilà!
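One small addition worth making: the file built this way has no interpreter line, so it is run by whatever shell happens to execute it. It is common to start a script with a shebang; for example, you could have begun with:
echo '#!/bin/sh' > myScript.sh   # single > starts the file over, with the shebang as line 1
and then appended the other commands with >> as before.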
To run the script every day at a certain time, edit your crontab:
crontab -e
Then add a line like the following:
30 10 * * * script.sh
It will run every day at 10:30 AM.
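Note that cron uses a minimal PATH and does not start in the directory you expect, so it is safest to give the absolute path to the script and capture its output somewhere; for example (the /home/you paths are placeholders):
30 10 * * * /home/you/myScript.sh >> /home/you/myScript.log 2>&1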
I am running a script called upgrade.sh
And upgrade.sh calls a script called roll.sh:
roll.sh >> logfile.text
But roll.sh has some questions and prompts, and the redirect is preventing those outputs from hitting the screen. I cannot edit roll.sh.
I also tried results=$(roll.sh)
Even then, the output did not come to the screen.
Use tee; it was created for exactly this purpose: it copies its standard input to the screen (standard output) and to one or more files. Make sure to use the -a option to append to logfile.text if you don't want to overwrite it.
roll.sh | tee -a logfile.text
You want tee:
TEE(1) User Commands TEE(1)
NAME
tee - read from standard input and write to standard output and files
A common way to handle that is to have the script write its prompts to stderr instead of stdout.
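For example, a script written that way keeps its prompt on the terminal even when its standard output is redirected to a log (a sketch only, not the contents of roll.sh, which you said you cannot edit):
#!/bin/sh
printf 'Continue? [y/N] ' >&2    # the prompt goes to stderr, so it still reaches the terminal
read answer                      # read the reply from the keyboard
echo "user answered: $answer"    # normal output; this lands in the log when stdout is redirected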
I'm attempting to execute the following series of commands to create backups of MySQL databases.
When I attempt to add the command to my crontab using crontab -e, I get the error "errors in crontab file, cannot install" and it asks me if I want to retry.
mkdir /home/mysql-backup/`date '+%m-%d-%Y'`; mysql -s -r -e 'show databases' | while read db; do mysqldump $db -r /home/mysql-backup/`date '+%m-%d-%Y'`/${db}.sql; done; rm -r -f `date --date="1 week ago" +%m-%d-%Y`; du -k |sort -n > output; mail -s "MySQL Backups" "steven#brightbear.net" < output
Is there anything I should be changing in this file? Or should I look into creating a script file and calling that from cron?
Thanks in advance for any assistance you can provide.
If you gave that line to crontab -e as-is, of course it will complain. A line in a crontab file should start with 5 fields indicating when you want the command to run, as described in the crontab man page.
On the other hand, most Linux distros nowadays have preset facilities for things that should be executed hourly (/etc/cron.hourly), daily (/etc/cron.daily), etc. It's a whole lot easier to just put your script in a file in the appropriate directory and it will get executed in the selected time raster. An added advantage is that in these files you won't be forced to cram everything into one line.
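For example, once the backup commands live in their own script, installing it for daily execution is just (the script name here is a placeholder):
sudo cp mysql-backup.sh /etc/cron.daily/mysql-backup   # no extension: run-parts on many distros skips names containing dots
sudo chmod +x /etc/cron.daily/mysql-backup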
Yes; as a matter of style, if nothing else, I encourage you to put the MySQL commands into a shell script, and then run the shell script from cron. (And, as Anew points out, the command sequence is easier to maintain and debug if it's broken out into multiple lines, with comments.) But: is that all of what you're feeding into crontab? Look at man crontab and add the fields that specify when you want the command to run.
From the crontab(5) man page, it looks like percent signs (%) have a special meaning, so that is probably where you're running into trouble.
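So every % in the date format strings would need to be escaped with a backslash if the command stays in the crontab itself, e.g. (the time fields here are just an example):
0 2 * * * mkdir /home/mysql-backup/$(date '+\%m-\%d-\%Y')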
Yes, you should put your commands into a separate shell script and just call that from the crontab line. This will also make it much easier to read the crontab file, and you can format your script nicely so that it's easier to maintain. And you can test it separately from the crontab that way.
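For instance, the one-liner from the question could become a script along these lines (a sketch only: the paths and date format are taken from the question, while the report file, the mail address and the rm/du targets are assumptions you would adjust):
#!/bin/sh
# Daily MySQL backup, intended to be run from cron.

backup_root=/home/mysql-backup
today=$(date '+%m-%d-%Y')
last_week=$(date --date='1 week ago' '+%m-%d-%Y')

# Make today's backup directory
mkdir -p "$backup_root/$today"

# Dump every database into its own file
mysql -s -r -e 'show databases' | while read -r db; do
    mysqldump "$db" -r "$backup_root/$today/$db.sql"
done

# Remove the backups from one week ago
rm -rf "$backup_root/$last_week"

# Mail a disk usage summary
du -k "$backup_root" | sort -n > /tmp/mysql-backup-report.txt
mail -s "MySQL Backups" you@example.com < /tmp/mysql-backup-report.txt
The crontab entry then shrinks to the five time fields plus the path to this script.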