Altering the environment just for a single command is very simple:
DB=postgresql some_command --with --arguments
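For example, the variable is visible to that one command only (a quick check, assuming DB was not already set):
DB=postgresql sh -c 'echo "DB is $DB"'   # prints "DB is postgresql"
echo "DB is $DB"                          # prints "DB is " - the variable is gone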
Unfortunately, I have to do this on a remote server, and due to limitations of the deployment I can only edit what comes after some_command. The following would be nice, but doesn't do the trick (in Bash):
some_command --with --arguments DB=postgresql
Is there some other Bash hack to get there?
Here's another idea, a bit wild I'm afraid:
some_command --do-nothing `DB=postgresql some_command --now-really`
The idea is that the backquoted command will actually do what you want. The first some_command is only there so the command line starts the way your deployment requires; you should find parameters that make it do something harmless.
If you have nothing equivalent for the --do-nothing parameter, you can do this:
some_command `DB=postgresql some_command --now-really; ps-grep-kill`
Where ps-grep-kill stands for a combination of those commands (I leave the details as an exercise) that finds the parent process, which is just about to run some_command, and kills it before it gets the chance (but after the backquoted some_command has already run).
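In fact, here is a minimal sketch of that idea which needs no ps or grep at all: in bash, $$ inside the backquotes still expands to the PID of the invoking shell, so the subshell can kill it directly. Use with care, since it kills whatever shell runs the line:
some_command `DB=postgresql some_command --now-really; kill $$`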
Can you execute export DB=postgresql to modify the environment variables globally? Then you can run subsequent commands that pick up the new environment variable.
If you just run the command, it can't change the environment.
But if you source it, it can:
source some_command --with --arguments DB=postgresql
Shorthand:
. some_command --with --arguments DB=postgresql
I prefer not to source large scripts this way, because then they may end up changing more than you intended. So I'd write two scripts: one finds what you want to change and outputs it; the other is a very small one that runs the first (normally, without source) and applies the changes.
But re-reading your question, it seems like you can't use source. So I see no way to do what you want.
In the middle of my Perl script I want to execute a bash command. The script takes a long time, so at the beginning of the script I want to check whether the command exists. This answer says to just try to run it, and this other answer suggests some bash commands to test whether the program exists.
Is the latter option the best solution? Are there any better ways to do this check in perl?
My best guess is that you want to check for the existence of an executable file that you want to run using system or qx//.
But if you want your command line to behave the same way as the shell does, then you can probably use File::Which.
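A minimal sketch of the File::Which route (assuming the module is installed; ls is just an example name):
perl -MFile::Which -e 'print which("ls") // "not found", "\n"'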
What if we assume that we don't know the command's location?
This means that syck's answer won't work, and zdim's answer is incomplete.
Try this function in perl:
sub check_exists_command {
    # `command -v` prints the resolved path (a true value) or nothing (false)
    my $check = `sh -c 'command -v $_[0]'`;
    return $check;
}
# two examples
check_exists_command 'pgrep' or die "$0 requires pgrep";
check_exists_command 'readlink' or die "$0 requires readlink";
I just tested it, because I just wrote it.
With Perl you can test files for existence, readability, executability, etc.; take a look here.
Therefore just use
executeBashStuff() if -x $filename;
or stat it:
stat($filename);
executeBashStuff() if -x _;
To me a better check is to run the program at the beginning of the script (with -V say).
I'd use the same invocation as you use to run the job later (via shell or not, via execvp). While at it, make sure to check whether it threw errors. This is also discussed in your link, but I would in fact capture the output (rather than discard it) and check it. This is the surest way to see whether the thing actually runs out of your program and whether it is what you expect it to be.
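A rough sketch of that idea (my_tool and its -V flag are placeholders for your actual command):
perl -e 'my $out = qx(my_tool -V 2>&1); die "cannot run my_tool: $out" if $?; print "found: $out"'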
Checking for the executable with -x (if you know the path) is useful, too, but it only tells you that a file with a given name is there and that it is executable.
The system's which seems to be beset with criticism for its possible (mis)behavior: it may or may not be a shell builtin (which complicates how exactly to use it), it is an external utility, and its exact behavior is system dependent. The module File::Which pointed out in Borodin's answer would be better, if it is indeed better than which. (Which it may well be; I just don't know.)
Note: I am not sure what "bash command" means: a bash shell builtin, or the fact that you use bash on the terminal? Perl's qx and system use the sh shell, not bash (if they invoke the shell at all, which depends on how you use them). While sh is usually a link, often to bash, it may not be; there are differences, and you cannot rely on your shell configuration.
You can also run a specific shell explicitly, qx(/path/bash -c 'cmd args'), if you must. Mind the quotes. You may need to play with it to find the exact syntax on your system. See this page and links.
Some weeks ago, a senior team member unexpectedly removed an important Oracle database file (.dbf). Fortunately, we could restore the system using backup files saved a few days earlier.
After seeing that situation, I decided to implement a solution that requires at least a double confirmation when typing the rm command at the prompt (more checking than rm -i).
Even though we alias rm to rm -i by default, very fast typists still make mistakes like that one, myself included.
First, I replaced (via an alias) the basic rm command with a specific bash script that prints warnings and asks for confirmation several times when the targets are related to the Oracle database paths or files.
Simply speaking, the script acts as a filter in front of rm: if the target is not related to Oracle, rm operates as normal.
While implementing it, I found that most features work as I expected in the interactive prompt environment, except for one concern:
what if the rm command is called from within other scripts (provided by Oracle, by other vendors modifying the Oracle path, by installers, etc.) or from programs (via a system call)?
How can I distinguish that situation?
If such scripts hit the modified rm, their execution won't go ahead anymore.
Do you have more sophisticated methods?
I believe most readers can understand my rough explanation.
If the picture above isn't clear, let me know and I will elaborate.
We read in man bash:
Aliases are not expanded when the shell is not interactive, unless the
expand_aliases shell option is set using shopt.
Then if you use alias to make rm invoke your shell script, other scripts won't use it by default. If that's what you want, then you're already safe.
The problem is if you want your version of rm to be invoked by scripts and do something smart when that happens. An alias is not enough for that; even putting your rm somewhere under $PATH is not enough for programs explicitly calling /bin/rm. And for programs that aren't shell scripts, the unlink system call is much more likely to be used than something like system("rm ...").
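For instance, a non-shell program can remove a file with a single unlink call and never invoke rm at all (a one-line illustration; somefile is a placeholder):
perl -e 'unlink "somefile" or warn "unlink failed: $!"'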
I think that for the whole "safe rm" thing to be useful, it should avoid prompts even when invoked interactively. Every user will develop the habit of saying "yes" to it, and there is no known way around that. What might work is something that moves files to a recycle bin instead of deleting them, making damage easy to undo (as I seem to recall, there were ready-to-use solutions for this).
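A minimal sketch of that recycle-bin idea (the ~/.trash location is an assumption, and real rm options like -r/-f are not handled):
rm() {
    local trash="$HOME/.trash"
    mkdir -p "$trash"
    mv -- "$@" "$trash"/   # a real version must parse rm's options first
}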
The answer is in the alias manpage:
Note aliases are not expanded by default in non-interactive
shell, and it can be enabled by setting the expand_aliases shell
option using shopt.
Check it by yourself with man alias ;)
Anyway, I would do it the same way you've chosen.
To distinguish the situation: you can create an environment variable, say APPL, which will be set with export APPL="DATABASE". In your customized rm script, perform the double checks only if APPL is DATABASE (which indicates a database-related script), and not otherwise, which means the rm call comes from other scripts.
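A sketch of that branch inside the customized rm script (confirm_oracle_targets is a hypothetical helper for the confirmation logic):
if [ "$APPL" = "DATABASE" ]; then
    confirm_oracle_targets "$@"   # extra double-checking for database callers
else
    /bin/rm "$@"                  # any other caller gets plain rm behaviour
fi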
If you're using bash, you can export your shell function, which will make it available in scripts, too.
#!/usr/bin/env bash
# Define a replacement for `rm` and export it.
rm() { echo "PSYCH."; }; export -f rm
Shell functions take precedence over builtins and external utilities, so by using just rm even scripts will invoke the function - unless they explicitly bypass the function by invoking /bin/rm ... or command rm ....
Place the above (with your actual implementation of rm()) either in each user's ~/.bashrc file or in the system-wide bash profile; sadly, its location is not standardized (e.g., Ubuntu: /etc/bash.bashrc; Fedora: /etc/bashrc).
I am trying to create a script that will run a program on each file in a list. I have been trying to do this using a .csh file (I have no clue if this is the best way), and I started with something as simple as hello world
echo "hello world"
The problem is that I cannot execute this script, or verify that it works correctly. (I was trying to do ./testscript.csh, which is obviously wrong.) I haven't been able to find anything that really explains how to run csh scripts, and I'm guessing there's a better way to do this too. What do I need to change to get this to work?
You need to mark it as executable; Unix doesn't execute things arbitrarily based on extension.
chmod +x testscript.csh
Also, I strongly recommend using sh or bash instead of csh, or you will soon learn about the idiosyncrasies of csh's looping and control flow constructs (some things only work inside them if done a particular way, in particular with the single-line versions things are very limited).
You can use ./testscript.csh. You will however need to make it executable first:
chmod u+x testscript.csh
which means: set testscript to have execute permission for the user (whoever the file is owned by, which in this case should be yourself!).
Also, to tell the OS that this is a csh script, you will need to put
#! /path/to/csh
on the first line (where /path/to/csh is the full path to csh on your system. You can find that out by issuing the command which csh).
That should give you the behaviour you want.
EDIT As discussed in some of the comments, you may want to choose an alternative shell to C Shell (csh). It is not the friendliest one for scripting.
You have several options.
You can run the script from within your current shell. If you're running csh or tcsh, the syntax is source testscript.csh. If you're running sh, bash, ksh, etc., the syntax is . ./testscript.sh. Note that I've changed the file name suffix; source or . runs the commands in the named file in your current shell. If you have any shell-specific syntax, this won't work unless your interactive shell matches the one used by the script. If the script is very simple (just a sequence of simple commands), that might not matter.
You can make the script an executable program. (I'm going to repeat some of what others have already written.) Add a "shebang" as the first line. For a csh script, use #!/bin/csh -f. The -f avoids running commands in your own personal startup scripts (.cshrc et al), which saves time and makes it more likely that others will be able to use it. Or, for a sh script (recommended), use #!/bin/sh (no -f; it has a completely different meaning there). In either case, run chmod +x the_script, then ./the_script.
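For instance, putting that together with the hello-world script from the question, testscript.csh would contain:
#!/bin/csh -f
echo "hello world"
and you would then run:
chmod +x testscript.csh
./testscript.csh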
There's a trick I often use when I want to perform some moderately complex action. Say I want to delete some, but not all, files in the current directory, but the criterion can't be expressed conveniently in a single command. I might run ls > tmp.sh, then edit tmp.sh with my favorite editor (mine happens to be vim). Then I go through the list of files and delete all the ones that I want to leave alone. Once I've done that, I can replace each file name with a command to remove it; in vim, :%s/.*/rm -f &/. I add a #!/bin/sh at the top, save it, chmod +x tmp.sh, then ./tmp.sh. (If some of the file names might have special characters, I can use :%s/.*/rm -f '&'/.)
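The whole sequence, condensed:
ls > tmp.sh        # capture the candidate file names
vim tmp.sh         # delete the lines for files to keep, then :%s/.*/rm -f '&'/
                   # and insert #!/bin/sh as the first line
chmod +x tmp.sh
./tmp.sh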
I was wondering if there is a way to run Linux commands from a Perl script. I am talking about commands such as cd, ls, ll, clear, cp.
You can execute system commands in a variety of ways, some better than others.
Using system();, which prints the output of the command, but does not return the output to the Perl script.
Using backticks (``), which don't print anything, but return the output to the Perl script. An alternative to using actual backticks is the qx// operator, which is easier to read and accomplishes the same thing.
Using exec();, which does the same thing as system();, but never returns to the Perl script at all, unless the command cannot be executed (e.g., it doesn't exist).
Using open();, which allows you to either pipe input from your script to the command, or read the output of the command into your script.
It's important to mention that the system commands you listed, like cp and ls, are much better done using built-in functions in Perl itself. Any system call is a slow process, so use native functions when the desired result is something simple, like copying a file.
Some examples:
# Prints the output. Don't do this.
system("ls");
# Saves the output to a variable. Don't do this.
my $lsResults = `ls`;
# Something like this is more useful.
system("imgcvt", "-f", "sgi", "-t", "tiff", "Image.sgi", "NewImage.tiff");
This page explains in a bit more detail the different ways that you can make system calls.
You can, as voithos says, use either system() or backticks. However, take into account that this is not recommended, and that, for instance, cd won't work (it won't actually change the directory). Note that those commands are executed in a new shell, and won't affect the running Perl script.
I would not rely on those commands; try to implement your script in Perl itself (if you've decided to use Perl, anyway). In fact, Perl was originally designed to be a powerful substitute for sh and other UNIX shells for sysadmins.
You can surround the command in backticks:
`command`
The problem is that Perl tries to execute bash builtins (e.g. source) as if they were real files, but it can't find them because they don't exist as files. The answer is to tell Perl what to execute explicitly. In the case of bash builtins like source, do the following and it works just fine:
my $XYZZY=`bash -c "source SOME-FILE; DO_SOMETHING_ELSE; ..."`;
or, for the case of cd, do something like the following:
my $LOCATION=`bash -c "cd /etc/init.d; pwd"`;
I am a novice to the Linux shell and recently had to start using it for work. I have now got used to the basic commands in bash to find my way around. However, there are a lot of commands I find myself typing over and over again, and it's kind of a hassle to type them every time. So can anyone tell me how I can shorten the command syntax for the ones I use frequently?
A very simple example: I use the ls -lh command often. This is quite short, but I'm just giving an example. Can I have something (a shell script maybe) so that I can run it by typing just, say, lh?
I want to do it for more complex commands.
alias lh='ls -lh'
If you want to make this persistent across sessions, put it in your ~/.bashrc file. Don't forget to run source ~/.bashrc afterwards to make bash aware of the changes.
If you want to pass variables, an alias just isn't enough. You can make a function. As an example, consider the command lsall to list everything in a given directory (note this is just an example and thus very error prone):
function lsall
{
    ls "$1"/*   # quoting $1 guards against spaces in the directory name
}
$N gets replaced with the Nth argument.
You would place the following alias in your .bashrc file:
alias lh='ls -lh'
Now lh is shorthand for ls -lh.
For more complicated tasks you could use a bash function. For example, on one of my machines I have a function which causes 'ls' to run after every successful 'cd':
cdls() {
    builtin cd "$*" && ls
}
alias cd='cdls'
You can define aliases. For longer commands, use a function, put it into a library file, and source that file whenever you want to use your functions.
Just for the sake of completeness, since you want to learn bash: you could also write a function
lh() {
    ls -lh "$@"   # "$@" forwards any arguments to ls
}
although I would never write that when a simple alias would do ;-)
;) Heh, I remember one problem when I was starting out on Linux, which is that I would ask questions like these, and people would diligently answer them, but no one would explain how to make such changes permanent, and so I found myself typing in a bunch of commands every time I opened a terminal.
So, even though others have accurately answered this question: if you want to make the change permanent, put the alias line into your ~/.profile or ~/.bashrc file (~ = your home directory). It depends a bit on your distribution which one is run when; I always try adding my aliases to ~/.profile first, and if that doesn't work, then ~/.bashrc. One of them should work for sure.
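For example:
echo "alias lh='ls -lh'" >> ~/.bashrc   # make it permanent
source ~/.bashrc                        # pick it up in the current shell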