Appcmd commands exit with failures when they would have no effect, e.g. trying to add a MIME type that already exists.
Is there a way to either override existing values or be less strict with errors?
My use case is I'm using appcmd to deploy a website, and I'd like commands to run smoothly even if they have been run previously.
Assuming you're using a CMD batch file, you could append
2>nul || echo command failed
to the end of your appcmd invocation; this produces less noise than the regular failures and should give it a 0 return code, I think. But short of ignoring the errors that happen, no, you can't tell appcmd itself to "try to do this but ignore it if it fails".
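For example, a minimal sketch in a batch file (the .json extension and MIME type here are placeholders, not from the question):

REM Succeeds quietly, or echoes a note, whether or not the mapping already exists.
%windir%\system32\inetsrv\appcmd set config /section:staticContent /+"[fileExtension='.json',mimeType='application/json']" 2>nul || echo MIME mapping already present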
I would like to configure my bash so that I can react to the user entering a command. The moment they press Enter, I would like my bash to first run a script I installed (analogous to PROMPT_COMMAND, which is run each time a prompt is printed). This script should be able to
see what was entered,
maybe change it,
maybe even make the shell ignore it (i.e. make it not execute the line),
decide whether the line should be inserted into the history or not,
and maybe similar things.
I have not found a proper way to do this. My current implementations are all flawed: they use things like DEBUG traps to intervene before a command executes, or (HISTTIMEFORMAT='%s '; history 1) to ask the history, after the command has finished, when it was started, etc. But that is only hindsight, which is not really what I want.
I'd expect something like a COMMAND_INTERCEPTION variable which would work similarly to PROMPT_COMMAND, but I'm not able to find anything like it.
I also considered using command-line completion to achieve my goal, but couldn't find anything there about reacting to the submission of a finished command; maybe I just missed it.
Any help appreciated :)
You can use the DEBUG trap and the extdebug feature, and peek at BASH_COMMAND from the trap handler to see the command being run. (Though, as noted in the comments, the DEBUG trap fires on every simple command, not on every command line. Subshells also elude it.)
The DEBUG handler can prevent the command from running, but it can't change it directly. Of course, you could run any command from inside the handler, possibly using BASH_COMMAND and eval to build it, and then tell the shell to ignore the original command.
This would prevent running anything starting with ls:
$ preventls() { case "$BASH_COMMAND" in ls*) echo "no!"; return 1 ;; esac; }
$ shopt -s extdebug
$ trap preventls DEBUG
$ ls -l
no!
Use trap - DEBUG to remove the trap. Tested on Bash 4.3.30.
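Building on that, a sketch of the "rewrite via eval" idea mentioned above (the handler name and the ls -lh substitution are mine, not part of the original answer):

# Replace any ls invocation with "ls -lh", then return 1 so the shell
# skips the original command (requires extdebug; commands run inside the
# handler do not re-trigger the DEBUG trap).
rewritels() {
  case "$BASH_COMMAND" in
    ls|ls\ *) eval "ls -lh${BASH_COMMAND#ls}"; return 1 ;;
  esac
}
shopt -s extdebug
trap rewritels DEBUG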
I need to make sure there are no syntax errors on dhcpd.conf. If there are errors, I want to get what they are.
I can check for syntax errors with this command:
dhcpd -cf /path/to/dhcpd.conf
but that prints a lot of information in addition to the error I got. Another thing is that I don't want to actually run dhcpd, even if there are no syntax errors. I only want to check for syntax errors and see what they are.
Unfortunately, running dhcpd -tf /path/to/dhcpd.conf also didn't solve my problem.
The syntax you are looking for is
dhcpd -t -cf /path/to/dhcpd.conf
The -t option will do a config check:
If the -t flag is specified, the server will simply test the configuration file for correct syntax, but will not attempt to perform any network operations. This can be used to test the new configuration file automatically before installing it.
You do not need to use -cf if you are using the default config file path.
/usr/sbin/dhcpd -t
The one you tried with -tf /path/to/... is quite different and relates to tracing.
One thing not on the manual page, and not covered here yet, is that the '/usr/sbin/dhcpd -t' command uses its return value to indicate whether the configuration is correct or not.
If there are no errors, it returns zero; if there are syntax errors, it returns non-zero (1 in the test I did).
So you can use something like:
/usr/sbin/dhcpd -t
if [ $? -ne 0 ]; then
    echo "Configuration has errors, aborting"
    exit 1
fi
/bin/systemctl restart isc-dhcp-server
to check whether changes made to the configuration are valid before trying to restart the server with the new version.
Unfortunately, I don't think there is any option to display just the errors. You could use a text-parsing tool (awk, python, etc.) to remove the header lines (for the version I have, everything up to a line beginning with "For info") and the trailer lines (for the version I have, everything after a line saying "Configuration file errors encountered -- exiting"), which would leave just the syntax error and its location.
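For example, a rough awk sketch (the marker strings depend on your dhcpd version, as noted above):

# Print only the lines between the "For info" banner and the error trailer.
/usr/sbin/dhcpd -t 2>&1 | awk '/^For info/ {on=1; next} /^Configuration file errors encountered/ {exit} on'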
I am new to shell scripting. I am trying to work through this.
> script to execute in cron (util.sh)
#!/bin/sh
HOST='ahostname'
PORT='3306'
USER='auser'
PASS='apassword'
DB='adatabase'
. /mnt/stor/backups/backup.sh
(I also tried source /mnt/stor/backups/backup.sh)
> script to execute (backup.sh)
When backup.sh is called (it does get called), it appears to simply be parsed and not executed. No matter what I put in it, I get messages like:
/mnt/stor/backups/backup.sh: line 8: date: command not found
/mnt/stor/backups/backup.sh: line 8: mysqldump: command not found
/mnt/stor/backups/backup.sh: line 8: tar: command not found
/mnt/stor/backups/backup.sh: line 8: rm: command not found
The idea is to have a domain-localized file that sets the variables and then calls a master script that uses those variables to do the dirty work. Because of limitations with one of my hosts and multiple domains, this is the best method.
The script with the problem seems to be /mnt/stor/backups/backup.sh. Try setting PATH to include all the usual directories with binaries, so the script can find its tools. Or, even better, change /mnt/stor/backups/backup.sh to use absolute paths in the commands, like /bin/rm instead of just 'rm'.
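A minimal sketch of the first suggestion (the directory list is a common default; adjust it for your system):

#!/bin/sh
# Give the cron-invoked script an explicit PATH so it can find date,
# mysqldump, tar, rm, etc.
PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
export PATH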
When running from cron, you can't rely on the variables that are normally set in your login shell's profile (e.g. PATH, CLASSPATH, etc.). You have to set explicitly what you need. In your case, I'm guessing it's the lack of a PATH variable that's causing your trouble.
It's also good practice to put full paths to the programs you're executing from an unattended script, just to make sure you really are going to run that specific command, i.e., don't rely on the path.
So instead of
date
for example, use
/bin/date
etc.
Can you edit a shell script while it's running and have the changes affect the running script?
I'm curious about the specific case of a csh script I have that batch-runs a bunch of different build flavors and runs all night. If something occurs to me mid-operation, I'd like to go in and add additional commands, or comment out un-executed ones.
If not possible, is there any shell or batch-mechanism that would allow me to do this?
Of course I've tried it, but it will be hours before I see if it worked or not, and I'm curious about what's happening or not happening behind the scenes.
It does have an effect, at least in bash in my environment, but in a very unpleasant way. See these scripts. First, a.sh:
#!/bin/sh
echo "First echo"
read y
echo "$y"
echo "That's all."
b.sh:
#!/bin/sh
echo "First echo"
read y
echo "Inserted"
echo "$y"
# echo "That's all."
Do
$ cp a.sh run.sh
$ ./run.sh
$ # open another terminal
$ cp b.sh run.sh # while 'read' is in effect
$ # Then type "hello."
In my case, the output is always:
hello
hello
That's all.
That's all.
(Of course it's far better to automate it, but the above example is readable.)
[edit] This is unpredictable and thus dangerous. The best workaround is, as described here, to put everything in braces and, just before the closing brace, put "exit". Read the linked answer carefully to avoid the pitfalls.
[added] The exact behavior depends on a single extra newline, and perhaps also on your Unix flavor, filesystem, etc. If you simply want to see some of the effects, just add "echo foo/bar" to b.sh before and/or after the "read" line.
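For illustration, here is that workaround applied to a.sh (my own sketch, following the description above): the braces make the shell read the whole block before executing anything, and the explicit exit keeps anything appended after the closing brace from running.

#!/bin/sh
{
    echo "First echo"
    read y
    echo "$y"
    echo "That's all."
    exit
}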
Try this... create a file called bash-is-odd.sh:
#!/bin/bash
echo "echo yes i do odd things" >> bash-is-odd.sh
This demonstrates that bash is, indeed, interpreting the script "as you go". Editing a long-running script therefore has unpredictable results: random characters get inserted, and so on. Why? Because bash resumes reading from its saved byte position, so an edit shifts the location of the character currently being read.
Bash is, in a word, very, very unsafe because of this "feature". svn and rsync are particularly troubling when used with bash scripts, because by default they "merge" results by editing files in place. rsync has a mode that avoids this; svn and git do not.
I present a solution. Create a file called /bin/bashx:
#!/bin/bash
source "$1"
Now use #!/bin/bashx on your scripts and always run them with bashx instead of bash. This fixes the issue - you can safely rsync your scripts.
Alternative (in-line) solution proposed/tested by @AF7:
{
# your script
exit $?
}
Curly braces protect against edits, and exit protects against appends. Of course, we'd all be much better off if bash came with an option like -w (whole file), or something that did this.
Break your script into functions, and each time a function is called you source it from a separate file. Then you could edit the files at any time and your running script will pick up the changes next time it gets sourced.
foo() {
    source foo.sh
}
foo
Good question!
Hope this simple script helps:
#!/bin/sh
echo "Waiting..."
echo "echo \"Success! Edits to a .sh while it executes do affect the executing script! I added this line to myself during execution\" " >> ${0}
sleep 5
echo "When I was run, this was the last line"
It does seem, under Linux, that changes made to an executing .sh are picked up by the running script, if you can type fast enough!
An interesting side note: if you are running a Python script, it does not change. (This is probably blatantly obvious to anyone who understands how the shell runs Python scripts, but it might be a useful reminder for someone looking for this functionality.)
I created:
#!/usr/bin/env python3
import time
print('Starts')
time.sleep(10)
print('Finishes unchanged')
Then, in another shell, while this is sleeping, edit the last line. When the script completes it displays the unaltered line, presumably because Python reads and compiles the whole file before running it. The same happens on Ubuntu and macOS.
I don't have csh installed, but
#!/bin/sh
echo Waiting...
sleep 60
echo Change didn't happen
Run that, quickly edit the last line to read
echo Change happened
Output is
Waiting...
/home/dave/tmp/change.sh: 4: Syntax error: Unterminated quoted string
Hrmph.
I guess edits to the shell scripts don't take effect until they're rerun.
If this is all in a single script, then no, it will not work. However, if you set it up as a driver script calling sub-scripts, then you might be able to change a sub-script before it's called (or before it's called again, if you're looping), and in that case I believe those changes would be reflected in the execution.
I'm hearing no... but what about with some indirection:
BatchRunner.sh:
Command1.sh
Command2.sh

Command1.sh:
runSomething

Command2.sh:
runSomethingElse
Then you should be able to edit the contents of each command file before BatchRunner gets to it, right?
OR
A cleaner version would have BatchRunner read from a single file, consecutively running one line at a time. Then you should be able to edit this second file while the first is running, right? See the sketch below.
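A rough sketch of that idea (the file name and the re-reading approach are mine): re-open the command file on every iteration, so edits to lines that have not yet executed are picked up.

#!/bin/sh
# Run commands.txt one line at a time, re-reading the file on each pass.
# Stops at the first empty or missing line.
n=1
while :; do
    cmd=$(sed -n "${n}p" commands.txt)
    [ -z "$cmd" ] && break
    sh -c "$cmd"
    n=$((n + 1))
done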
Use Zsh instead for your scripting.
AFAICT, Zsh does not exhibit this frustrating behavior.
Usually it's uncommon to edit your script while it's running. All you have to do is put control checks on your operations: use if/else statements to check for conditions. If something fails, then do this; else do that. That's the way to go.
Scripts don't work that way; the executing copy is independent from the source file that you are editing. Next time the script is run, it will be based on the most recently saved version of the source file.
It might be wise to break this script out into multiple files and run them individually. This will reduce the execution time to failure (i.e., split the batch into one script per build flavor and run each individually to see which one is causing the trouble).
I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2, 3, and 4.
I can do this, but it will be a very plain and simple bash script simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script would be:
put set -e at the top of the script (after the shebang). This causes the script to stop if any of the commands fail, so if it cannot cd to the directory, it will not run svn update or restart apache. You can get the same effect programmatically by putting || exit 1 after each command, but if that's all you're doing, you may as well use set -e.
Use full paths in your script. Do not assume the directory the script is run from. In this specific case, the cd command has a relative path; use a full (absolute) path, or use an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command line arguments to parameterise the script. But don't bother doing this up front; just evolve your scripts as you need. A minimal sketch pulling these points together follows.
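Something like this (the paths come from the question; the sudoers line and user name are assumptions):

#!/bin/bash
set -e
cd "$HOME/django_src/django_apps/team_proj"
svn update
sudo /etc/init.d/apache2 restart
# For passwordless sudo, an entry added via visudo along these lines
# is one option (user name assumed):
#   deploy ALL=(root) NOPASSWD: /etc/init.d/apache2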
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
I'd set up a cron job to do that automatically.
Since you're using python, check out fabric; you can use it to automate these kinds of tasks. First install fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *

def svnupdate():
    with cd('django_src/django_apps/team_proj'):
        run('svn update')
        sudo('/etc/init.d/apache2 restart')
Save as fabfile.py, then run using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, zsh, plus ssh and standard tools), or with programming languages such as Python, Perl, Ruby, PHP, Java, etc.; basically any language that supports the SSH protocol and operating-system functions. The "best" one is the one you are most comfortable and knowledgeable in. If you are doing sysadmin work, the shell is the closest thing you can use. Once your script is done, you can use crontab (cron) or the at command to schedule the task; check their man pages for more information.
You can easily do the above using bash/Bourne etc.
However, I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why?
the language constructs are much more powerful
there is no end of libraries to interface with the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I chose Perl in particular because it's been designed (perhaps "designed" is too strong a word for Perl) for these sorts of tasks. However, you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps, look at camh's answer. If you plan to run the script via cron, implement some simple logging, e.g. by appending the start time of each command, along with its exit code, to a text file that you can later analyze for failures of the script.
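A rough sketch of that logging idea (the log path and helper name are mine):

#!/bin/bash
log=/var/log/deploy.log
run_step() {
    echo "$(date '+%F %T') START $*" >> "$log"
    "$@"
    rc=$?
    echo "$(date '+%F %T') EXIT $rc $*" >> "$log"
    return "$rc"
}
run_step svn update
run_step sudo /etc/init.d/apache2 restart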
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory
[[ -z $1 || -z $2 ]] && { echo "usage: $(basename $0) user@host project_dir"; exit 1; }
declare host=$1 proj_dir=$2
ssh $host "cd $proj_dir; svn update; sudo /etc/init.d/apache2 restart" && echo "Success"
Just to add another tip: you should not give users access to an application in an unknown state. svn up might break mid-update, users might see a page that's half-new and half-old, etc. If you're deploying the whole application at once, I'd suggest doing svn export instead, into a new directory, and then either mv current old ; mv new current, or even keeping current as a symlink to the directory you're currently using. It's still not perfect and doesn't block every possible race condition, but it definitely takes less time than svn up on the live copy.
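A sketch of that swap (the repository URL and directory names are placeholders):

# Export a clean copy, then swap it into place.
svn export http://svn.example.com/repo/trunk release_new
mv current old && mv release_new current
# Or, if "current" is kept as a symlink, replace it in one step:
ln -sfn release_new current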