Appending to file with sudo access - linux

I am trying to append some lines to an existing file owned by root, and I have to do this on about 100 servers. So I created servers.txt with all the IPs, and ntp.txt with the lines that I need to append. I am executing the following script but it doesn't do what I intend. Can someone please suggest what needs to be corrected?
!/bin/bash
servers=`cat servers.txt`;
for i in $servers;
do
cat ntp.txt | ssh root@${i} sudo sh -c "cat >>ntp.conf""
done

Here are some issues; not sure I found all of them.
The shebang line lacks the #, which is crucial; without it, !/bin/bash is not a valid interpreter line, and the script is not guaranteed to run under bash.
There is no need to read the server names into a variable, and in addition to wasting memory, you are exposing yourself to a number of potential problems; see https://mywiki.wooledge.org/DontReadLinesWithFor
Unless you specifically require the shell to do whitespace tokenization and wildcard expansion on a value, put it in double quotes (or even single quotes, but this inhibits variable expansion, which you still want).
If you are logging in as root, no need to explicitly sudo.
ssh runs a shell for you; no need to explicitly sh -c your commands.
On the other hand, you want to avoid running a root shell if you can. A common way to append to a file without having to spawn a shell just to be able to redirect is sudo tee -a (drop the -a to overwrite instead of append). tee also prints the file to standard output, which here is an undesired effect (some would say the main effect of tee rather than a side effect, but let's not go there), so you often redirect to /dev/null to avoid having the text also spill onto your screen.
You probably want to avoid a useless use of cat if only just to avoid having somebody point out to you that it's useless.
#!/bin/bash
while read -r server; do
ssh you@"$server" sudo tee -a /etc/ntp.conf <ntp.txt >/dev/null
done <servers.txt
I changed the code to log in as you but it's of course something you will need to adapt to suit your environment. (If you log in as yourself, you usually just ssh server without explicitly specifying a user name.)
As per your comment, I also added a full path to the destination file /etc/ntp.conf
A more disciplined approach to server configuration is to use something like CFengine2 to manage configurations.

Related

Redirect output from command to terminal and a file without using tee

As the name implies, I want to log the output of a command to a file without changing the terminal behavior. I still want to see the output and importantly, I still need to be able to do input.
The input requirement is why I specifically cannot use tee. The application I am working with doesn't handle input properly when using tee. (No, I cannot modify the application to fix this). I'm hoping a more fundamental approach with '>' redirection gets around this issue.
Theoretically, this should work exactly as I want, but, like I said, it does not.
command | tee -a foo.log
Also notice I added the -a flag. It's not strictly required, because I can definitely work around it, but it would be nice to have that feature as well.
Use script.
% script -c vi vi.log
Script started, output log file is 'vi.log'.
... inside vi command - fully interactive - save the file, takes to:
Script done.
% ls -l vi.log
-rw-r--r-- 1 risner risner 889 Sep 13 15:16 vi.log
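script can also be driven non-interactively, and it does have an append flag (a sketch; session.log is an arbitrary file name, and the flags are those of the util-linux script):

```shell
# -q quiet, -a append to the log instead of truncating, -c run a command instead of a shell
script -q -a -c 'echo hello from script' session.log
```

Repeated runs keep adding to session.log, which addresses the -a wish from the question.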

Proxying commands across a SSH connection

I'd like to be able to "spoof" certain commands on my machine, actually invoking them on a remote system. For example, whenever I run:
cmd options
I'd like the actual command to be:
ssh user@host cmd options
Ideally I'd like to have a folder called spoof, add it to my PATH, and have an executable in there called cmd which does the spoofing. If I have a lot of commands, this could get tedious. Anyone have ideas of a good way to go about this? Such that I can add and remove a lot of commands in the future? And, I'd like to be able to pass all the arguments exactly (or as exact as possible), and every single command I want to spoof would just have the ssh user@host in front of it.
The reason for this is I'm running a container (specifically singularity) on my machine, and there are certain commands I don't really want to containerize, but still want to run from within the container. I've found I can get the functionality I want by just appending the ssh in front of it. Examples are sbatch and matlab which are a pain to containerize and I'm fine with just using ssh to call them. Files that these programs use are written to a bind point so the host machine can see them just fine.
The following script can be hardlinked under all the names of commands you wish to transparently proxy:
#!/usr/bin/env bash
printf -v str '%q ' "${0##*/}" "$@"
ssh host "$str"
SSH combines all its arguments into a single string, which is then executed by a remote shell. To ensure that the remote arguments are identical to the local ones, the values need to be escaped; otherwise, somecommand "hello world" and somecommand "hello" "world" would be represented identically over the wire.
In an appropriately extended printf (including both the bash and ksh implementations), %q is replaced with an escaped form of the corresponding value, which will be evaluated back to the original (literal) text if interpreted by a shell later.
printf -v varname stores the output of printf in a variable named varname without the overhead of a command substitution. (In ksh93, varname=$(printf ...) is optimized to skip the subshell overhead, so this is not necessary there.)
$0 evaluates to argv[0], which is by convention the name of the command currently being run. (This can be overridden, but you trust your users to behave reasonably... right?)
${0##*/} is a parameter expansion which returns only content after the last / in $0 (should it in fact contain any slashes; otherwise, the original value is used unmodified).
"$#" refers to the exact argument vector passed to your script.

Bad substitution in bash script

I'm trying to get a script working to add swap space to a VPS, as a workaround a la this method. I thought I had it working, but now I get the error fakeswap.sh: 5: Bad substitution whenever I try to execute it with sudo sh fakeswap.sh.
Below is my code:
#!/bin/bash
SWAP="${1:-512}"
NEW="$[SWAP*1024]"; TEMP="${NEW//?/ }"; OLD="${TEMP:1}0"
umount /proc/meminfo 2> /dev/null
sed "/^Swap\(Total\|Free\):/s,$OLD,$NEW," /proc/meminfo > /etc/fake_meminfo
mount --bind /etc/fake_meminfo /proc/meminfo
free -m
Clearly the substitution that seems to be failing is on the line: NEW="$[SWAP*1024]"; TEMP="${NEW//?/ }"; OLD="${TEMP:1}0"
I'm somewhat ashamed to say that I don't REALLY understand what's supposed to happen on that line (apart from the fact that we seem to be declaring variables that are all derivatives of SWAP in one way or another). I gather that the lines below substitute new constants into a dummy configuration file (for lack of a better term) but I don't get how the variables TEMP and OLD are being defined.
Anyway, I was wondering if anyone might be able to see why this substitution isn't working... and maybe even help me understand what might be happening when TEMP and OLD are defined?
Many thanks in advance!
sh is not bash. The sh shell doesn't recognize some valid bash substitutions.
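In particular, $[...] and ${var//pattern/replacement} are bash-isms; under a plain POSIX sh (often dash), the pattern substitution is exactly what raises "Bad substitution". If you need the arithmetic to be portable, POSIX arithmetic expansion works everywhere (a minimal sketch):

```shell
SWAP=512
NEW=$((SWAP*1024))   # POSIX arithmetic expansion; valid in sh and bash
echo "$NEW"          # 524288
```

The padding trick with ${NEW//?/ } has no direct POSIX equivalent, which is another reason to run the script with bash as its shebang intends.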
The intention of that script is that it be executable. You would do that by
chmod a+x fakeswap.sh
after which you can run it simply by typing
./fakeswap.sh
(assuming it is in the current working directory; if not, use the full path.)
By the way, TEMP is a number of spaces equal to the length of NEW and OLD is the result of changing the last space in TEMP to 0. So OLD and NEW have the same length, meaning that the sed substitution won't change the size of the file.

I want to get a tip of rm command filter by using bash script

A few weeks ago, a senior team member unexpectedly removed an important Oracle database file (.dbf). Fortunately, we could restore the system using backup files that had been saved a few days earlier.
After seeing that situation, I decided to implement a solution that requires at least a double confirmation when typing an rm command at the prompt (stricter checks than rm -i).
Even though we alias rm to rm -i by default, speedy typists still make mistakes like that team member did, myself included.
At first, I replaced (via an alias) the basic rm command with a bash script that prints warnings and asks for confirmation multiple times if the targets are related to the Oracle database paths or files.
Simply put, the script acts as a filter before rm runs. If the target is not related to Oracle, rm operates as normal.
While implementing this, I found that most features worked as I expected at the interactive user prompt, with one concern:
What if rm is called from within other scripts (Oracle-provided scripts, vendor scripts that touch Oracle paths, installers, etc.) or from programs (via a system call)?
How can I distinguish that situation?
If such a script hits my modified rm, its execution won't proceed any further.
Do you have more sophisticated methods?
I believe most readers can understand my rough explanation.
If you can't get a clear picture from the above, let me know and I will elaborate.
We read at man bash:
Aliases are not expanded when the shell is not interactive, unless the
expand_aliases shell option is set using shopt.
Then if you use alias to make rm invoke your shell script, other scripts won't use it by default. If it's what you want, then you're already safe.
The problem is if you want your version of rm to be invoked by scripts and do something smart when it happens. An alias is not enough for that; even putting your rm somewhere in $PATH is not enough for programs explicitly calling /bin/rm. And for programs that aren't shell scripts, the unlink system call is much more likely to be used than something like system("rm ...").
I think that for the whole "safe rm" thing to be useful, it should avoid prompts even when invoked interactively. Every user will develop the habit of saying "yes" to it, and there is no known way around that. What might work is something that moves files to a recycle bin instead of deleting them, making damage easy to undo (as I seem to recall, there were ready-to-use solutions for this).
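A minimal sketch of that recycle-bin idea (the ~/.trash location and the trash function name are illustrations, not a standard):

```shell
# move files into a per-user trash directory instead of deleting them
trash() {
  mkdir -p "$HOME/.trash"
  mv -- "$@" "$HOME/.trash/"
}
```

A real solution would also handle name collisions and probably follow the freedesktop.org Trash specification.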
The answer is in the bash man page, under aliases:
Note aliases are not expanded by default in non-interactive
shell, and it can be enabled by setting the expand_aliases shell
option using shopt.
Check it for yourself with man bash ;)
Anyway, I would do it the same way you've chosen.
To distinguish the situation: you can create an environment variable, say APPL, which your database-related scripts set with export APPL="DATABASE". In your customized rm script, perform the double checks only if APPL is DATABASE (which indicates a database-related script), and not otherwise, which means the rm call came from some other script.
If you're using bash, you can export your shell function, which will make it available in scripts, too.
#!/usr/bin/env bash
# Define a replacement for `rm` and export it.
rm() { echo "PSYCH."; }; export -f rm
Shell functions take precedence over builtins and external utilities, so by using just rm even scripts will invoke the function - unless they explicitly bypass the function by invoking /bin/rm ... or command rm ....
Place the above (with your actual implementation of rm()) either in each user's ~/.bashrc file, or in the system-wide bash profile - sadly, its location is not standardized (e.g.: Ubuntu: /etc/bash.bashrc; Fedora /etc/bashrc)
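To see the exported function shadowing rm in a child shell (PSYCH. is just the placeholder body from above):

```shell
rm() { echo "PSYCH."; }
export -f rm
bash -c 'rm some-file'   # prints PSYCH. - the function runs, not /bin/rm
```
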

Scripting on Linux

I am trying to create a script that will run a program on each file in a list. I have been trying to do this using a .csh file (I have no clue if this is the best way), and I started with something as simple as hello world
echo "hello world"
The problem is that I cannot execute this script, or verify that it works correctly. (I was trying to do ./testscript.csh which is obviously wrong). I haven't been able to find anything that really explains how to run C Scripts, and I'm guessing there's a better way to do this too. What do I need to change to get this to work?
You need to mark it as executable; Unix doesn't execute things arbitrarily based on extension.
chmod +x testscript.csh
Also, I strongly recommend using sh or bash instead of csh, or you will soon learn about the idiosyncrasies of csh's looping and control flow constructs (some things only work inside them if done a particular way, in particular with the single-line versions things are very limited).
You can use ./testscript.csh. You will however need to make it executable first:
chmod u+x testscript.csh
Which means: set testscript.csh to have execute permission for the user (whoever the file is owned by, which in this case should be yourself!).
Also to tell the OS that this is a csh script you will need put
#! /path/to/csh
on the first line (where /path/to/csh is the full path to csh on your system. You can find that out by issuing the command which csh).
That should give you the behaviour you want.
EDIT As discussed in some of the comments, you may want to choose an alternative shell to C Shell (csh). It is not the friendliest one for scripting.
You have several options.
You can run the script from within your current shell. If you're running csh or tcsh, the syntax is source testscript.csh. If you're running sh, bash, ksh, etc., the syntax is . ./testscript.sh. Note that I've changed the file name suffix; source or . runs the commands in the named file in your current shell. If you have any shell-specific syntax, this won't work unless your interactive shell matches the one used by the script. If the script is very simple (just a sequence of simple commands), that might not matter.
You can make the script an executable program. (I'm going to repeat some of what others have already written.) Add a "shebang" as the first line. For a csh script, use #!/bin/csh -f. The -f avoids running commands in your own personal startup scripts (.cshrc et al), which saves time and makes it more likely that others will be able to use it. Or, for a sh script (recommended), use #!/bin/sh (no -f; it has a completely different meaning there). In either case, run chmod +x the_script, then ./the_script.
There's a trick I often use when I want to perform some moderately complex action. Say I want to delete some, but not all, files in the current directory, but the criterion can't be expressed conveniently in a single command. I might run ls > tmp.sh, then edit tmp.sh with my favorite editor (mine happens to be vim). Then I go through the list of files and delete all the ones that I want to leave alone. Once I've done that, I can replace each file name with a command to remove it; in vim, :%s/.*/rm -f &/. I add a #!/bin/sh at the top, save it, chmod +x foo.sh, then ./foo.sh. (If some of the file names might have special characters, I can use :%s/.*/rm -f '&'/.)
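The same trick can be done non-interactively with sed instead of an editor (a sketch assuming GNU sed's -i; file names are arbitrary, and this only handles names without embedded newlines):

```shell
ls > tmp.sh                       # one file name per line (tmp.sh lists itself too)
# edit tmp.sh here to remove the names you want to keep, then:
sed -i "s/.*/rm -f '&'/" tmp.sh   # turn each remaining name into an rm command
sh tmp.sh                         # removes the listed files, tmp.sh included
```
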