Linux and wget: specify alternate link

Is it possible (via Linux command-line tools) to make wget download from an alternate link in case the download fails?
Example:
Download file.zip from http://www.secondary.com/file.zip in case it's not found at http://www.primary.com/file.zip.

You can use a shell construct like this:
wget http://www.primary.com/file.zip || wget http://www.secondary.com/file.zip
The || is the OR operator; it "short-circuits" the evaluation: the second command only runs if the first one fails. This is a functional style, where the first statement is evaluated and, if it's "true" (returns zero), the second is not evaluated; if it's "false" (returns non-zero), the second is evaluated. The side effect of evaluating these commands is downloading the file.
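The short-circuit behaviour is easy to see with true and false standing in for the two downloads:

```shell
true  || echo "fallback skipped: first command succeeded"   # prints nothing
false || echo "fallback runs: first command failed"
```

The same pattern generalizes to any pair of commands whose exit status distinguishes success from failure.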

Try something like this:
wget "$MYURL"
if [ $? -ne 0 ]
then
    wget "$MYALTURL"
fi
The exit status is captured in $?. (Note that wget_exit=$(wget "$MYURL") would capture wget's standard output, not its exit status.) You could also treat specific errors differently by looking at the exit statuses of wget here.
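For example, wget's documented exit statuses (0 success, 4 network failure, 8 server error response, among others) can be mapped to different fallback actions. A sketch, where classify_wget_status is a helper name invented here:

```shell
# Classify wget's exit status (codes per the wget manual).
classify_wget_status() {
    case "$1" in
        0) echo "success" ;;
        4) echo "network-failure" ;;   # DNS failure, refused connection, ...
        8) echo "server-error" ;;      # server replied with e.g. 404 or 500
        *) echo "other-error" ;;
    esac
}

# Usage sketch (URLs are placeholders):
#   wget "$MYURL"
#   case "$(classify_wget_status $?)" in
#       network-failure|server-error) wget "$MYALTURL" ;;
#       other-error) exit 1 ;;
#   esac
```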

how to extend a command without changing the usage

I have a global npm package, provided by a third party, that generates a report and sends it to a server.
in_report generate -date 20221211
And I want to let a group of users check whether the report has already been generated, in order to prevent duplication. Therefore, I want to run a sh script before executing the in_report command.
sh check.sh && in_report generate -date 20221211
But the problem is that I don't want to change the command they use to generate the report. I can apply a patch on their PCs (I am able to change the env path, etc.).
Is it possible to run sh check.sh && in_report generate -date 20221211 by running in_report generate -date 20221211?
If this "in_report" is only used for this exact purpose, you can create an alias by putting the following line at the end of the ".bashrc" or ".bash_aliases" file used by the people who will need to run in_report:
alias in_report='sh check.sh && in_report'
See https://doc.ubuntu-fr.org/alias for details.
If in_report is to be used in other ways too, this is not the solution. In that case, you may want to call it directly inside check.sh when a certain set of conditions on the parameters is matched. To do that:
alias in_report='bash check.sh'
The content of check.sh (run with bash, since it uses the [[ ]] test):
#!/bin/bash
if [[ $# -eq 3 && "$1" == "generate" && "$2" == "-date" && "$3" == "20"* ]] # Assuming all your dates are in the 21st century
then
    if [[ some test to check that the report has not been generated yet ]]
    then
        /full/path/to/the/actual/in_report "$@" # WARNING: be sure that nobody moves the actual in_report to another path
    else
        echo "This report already exists"
    fi
else
    /full/path/to/the/actual/in_report "$@"
fi
This is certainly not ideal, but it should work. By far the easiest and most reliable solution, if applicable, would be to skip the aliasing entirely and tell those who will use in_report to run your check.sh instead (with the same parameters they would pass to in_report); then you can directly call in_report instead of /full/path/to/the/actual/in_report.
Sorry if this was not very clear. In that case, feel free to ask.
On most modern Linux distros the easiest would be to place a shell script that defines a function in /etc/profile.d, e.g. /etc/profile.d/my_report with a content of
function in_report() { sh check.sh && /path/to/in_report "$@"; }
That way it gets automatically placed in people's environments when they log in.
The /path/to is important so the function doesn't call itself recursively.
A cursory glance through the documentation for the Mac suggests that you may want to edit /etc/bashrc or /etc/zshrc respectively.
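Another option (my addition, not from the answers above) is a wrapper script placed in a directory that comes before the real in_report's directory in PATH. The demo below builds everything in a throwaway directory; in real use the wrapper would live somewhere like /usr/local/bin (an assumed location):

```shell
demo=$(mktemp -d)

# Stand-in for the real in_report binary:
cat > "$demo/in_report.real" <<'EOF'
#!/bin/sh
echo "real in_report: $*"
EOF

# Stand-in for check.sh (always passes here):
cat > "$demo/check.sh" <<'EOF'
#!/bin/sh
exit 0
EOF

# The wrapper: run the check, then exec the real binary by its full path
# (using the full path is what prevents the wrapper from calling itself).
cat > "$demo/in_report" <<EOF
#!/bin/sh
sh "$demo/check.sh" "\$@" && exec "$demo/in_report.real" "\$@"
EOF

chmod +x "$demo/in_report" "$demo/in_report.real"
PATH="$demo:$PATH" in_report generate -date 20221211
# prints: real in_report: generate -date 20221211
```

Unlike the alias and profile.d approaches, this also intercepts in_report when it is called from scripts and cron jobs, not just from interactive shells.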

Linux terminal one-liner command to evaluate the response of an HTTP request

I'm guessing this is a very simple question, sorry about that.
I need to execute a command based on the response of an HTTP request. The catch is that it has to be a single command line and not a bash script (by that I mean a separate bash script file).
Here's a more concrete example. I have a local API that returns an integer if it is up and running:
$ curl -s http://localhost
1
Of course, for whatever reason the server might be down, in which case the above command returns an empty string. Or it might be up but return 0. In either of these two cases, I need to execute a command to mitigate the situation (if you are interested, I'll be executing exit(1)). Otherwise, if the API returns 1 or a larger number, I don't need to do anything.
Can someone please help me come up with a one-liner for this? Thanks.
Use command substitution to get the returned value as a string, then you can compare it inside the test command.
Here's a one-liner, but it only recognizes 1 as a valid answer:
[ "$(curl -s http://localhost)" = "1" ] || exit 1
Here's one that allows any value at least 1 as valid, but I can't write it as a one-liner.
var=$(curl -s http://localhost)
if [ -z "$var" ] || [ "$var" -eq 0 ]
then exit 1
fi
I don't know how to do this as a one-liner, because it needs to do two different tests on the result: a string test to check for an empty result, and a numeric test for 0. That requires assigning to a variable or doing repeated curl requests.
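For what it's worth, the assignment and both tests can share one physical line if they are joined with ; inside a function. The name check_api is invented here, and the function takes the response as an argument so it can be tried without a live server; in real use you would pass "$(curl -s http://localhost)":

```shell
# Succeeds only for a non-empty, strictly positive numeric response.
check_api() { v="$1"; { [ -n "$v" ] && [ "$v" -gt 0 ]; } 2>/dev/null || return 1; }
```

Then the whole check becomes one line: check_api "$(curl -s http://localhost)" || exit 1. The 2>/dev/null silences the error -gt would print for a non-numeric response; the test still fails in that case.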

How can I check for the presence of a single flag in tcsh?

I have a bash script that takes the command line flag called "--signal".
When this is specified on the command line, I want to run a line on the command prompt.
Is there a concise way of doing a brief if-statement to check? getopts adds a lot of clutter that I'm not too fond of.
if (--signal)
# Run this line
endif
Brute force, but it works. Replace found's value with a boolean, and if it is true then spin off your process to send the signal.
#!/bin/bash
found="didn't find it"
for v in "$@"
do
    if [ "$v" = "--signal" ]
    then
        found="found it"
    fi
done
echo "$found"
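A shorter bash variant of the same check (my addition, not part of the answer above) wraps a case glob match in a function. Note it can false-positive if some other argument happens to contain "--signal" surrounded by spaces:

```shell
# Return 0 if "--signal" appears as a whole word among the arguments.
has_signal() {
    case " $* " in
        *" --signal "*) return 0 ;;
        *) return 1 ;;
    esac
}

# Usage: if has_signal "$@"; then echo "found it"; fi
```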
The question says "tcsh", but the description says "bash". And then in the comment you mention "tcsh" again. So, I assume you want it in tcsh.
If you don't worry about error checking, like throwing errors for other unsupported arguments etc., this simple if statement does what you want, i.e. it checks whether "--signal" is passed to the script:
if ("$*" =~ "*--signal*") then
# Whatever you want to do
endif

Why does behavior of set -e in a function change when that function is called in a compound command w/ || or &&?

I narrowed my problem to a simple example which puzzles me.
I have tested it with GNU bash 4.2.46 on Centos and 4.3.46 on Ubuntu.
Here is a bash function that returns a non-zero (error) return code when called alone, but reverses its behavior when I use either && or || to chain another command. It looks like a bug to me. Can someone explain why it behaves this way?
$ echo $0
/bin/bash
$ function TEST() {( set -e; set -o pipefail; echo OK; false; echo NOT REACHED; )}
$ type TEST
TEST is a function
TEST ()
{
( set -e;
set -o pipefail;
echo OK;
false;
echo NOT REACHED )
}
$ TEST
OK
$ echo $?
1
$ TEST || echo "NON ZERO"
OK
NOT REACHED
$ echo $?
0
$ TEST && echo "UNEXPECTED"
OK
NOT REACHED
UNEXPECTED
$ echo $?
0
What you are seeing is the shell doing what it is specified to do. Non-zero return codes in if conditions, in loop conditions, and on the left-hand side of the || and && logical operators do not trigger detection by set -e or traps. This makes serious error handling more difficult than in other languages.
The root of all problems is that, in the shell, there is no difference between returning a non-zero code as a meaningful and intended status, or as the result of a command failing in an uncontrolled manner. Furthermore the special cases the shell has will disable checking at all depths in the call stack, not just the first one, entirely hiding nested failures from set -e and traps (this is pure evil if you ask me).
Here is a short example that shows what I mean.
#!/bin/bash
nested_function()
{
returnn 0 ; # Voluntarily misspelled
}
test_function()
{
if
[[ some_test ]]
then
nested_function
else
return 1
fi
}
set -e
trap 'echo Will never be called' ERR
if
test_function
then
echo "Test OK"
else
echo "Test failed"
fi
There is an obvious bug in the first function. This function contains nothing that disables error checking, but since it is nested inside an if block (and not even directly, mind you), that error is completely ignored.
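The "disabled at all depths" behaviour can be reduced to a few lines (POSIX-mandated, so sh or bash):

```shell
set -e
f() { false; echo "reached: set -e was suppressed inside f"; }

if f; then :; fi    # 'false' inside f is ignored: f is an if condition
echo "script is still running"

# Calling f outside any such context would instead abort the script at
# 'false', and f's echo would never run:
#   f
```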
You do not have that problem in, say, Java, where a return value is one thing and an exception is another, and where evaluating a return value in an if statement will not prevent an exception at any level of the call stack from doing its job. You have try/catch to handle exceptions, and there is no way to mix exceptions with return codes; they are fundamentally different things (an exception can be used as a return value, but then it does not trigger the exception mechanism as it does when thrown).
If you want to have the same thing in shell programming, you have to build it for yourself. It can be done using a "try" function that is used in front of all calls and keeps state for each nested call, a "throw" equivalent that allows exceptions to be thrown (not as non-zero return codes, but stored inside variables), and trap ... ERR to intercept non-zero return codes and be able to do things like generate a stack trace and trigger a controlled exit (e.g. deleting temporary files, releasing other resources, performing notifications).
With this approach, "exceptions" are explicitly handled failures, and non-zero return codes are bugs. You trade a bit of performance I guess, it is not trivial to implement, and it requires a lot of discipline. In terms of ease of debugging and the level of complexity you can build in your script without being overwhelmed when trying to trace the source of a problem, it is a game changer though.
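A minimal sketch of such a scheme; the names try/throw and the exception variable are inventions for illustration, not a standard mechanism, and a real implementation would also need the trap ... ERR part and per-call state:

```shell
exception=""

# "throw": record a failure description instead of relying on the return code.
throw() { exception="$1"; }

# "try": clear any previous exception, then run the given command.
try() { exception=""; "$@"; }

load_config() {
    [ -f /nonexistent/config ] || { throw "config file missing"; return; }
    echo "config loaded"
}

try load_config
if [ -n "$exception" ]; then
    echo "caught: $exception"
fi
```

Because the failure travels in a variable rather than a return code, it survives being called from if conditions and || chains, which is exactly where set -e and ERR traps go blind.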
Handling error codes is the intended behavior of || and &&.
set -e is a great practice in Bash scripting to alert you to any unwanted errors. When using it, you sometimes chain commands like
set -e
possibly_failing_command || true
echo "This is always reached"
precisely in order to keep the script from stopping on an expected failure.

Generate specific, non-zero return code?

I am working on some piece of python code that calls various linux tools (like ssh) for automation purposes. Right now I am looking into "return code" handling.
Thus: I am looking for a simple way to run some command that gives me a specific non-zero return code; something like
echo "this is a testcommand, that should return with rc=5"
for example. But of course, the above comes back with rc=0.
I know that I can call false, but this will always return with rc=1. I am looking for something that gives me an rc that I can control.
Edit: the first answers suggest exit; but the problem with that is that exit is a shell builtin, not a binary. So, when I try to run it from within a Python script, I get "No such file or directory: exit".
So, I am actually looking for some "binary" tool that gives me that (obviously one can write some simple script to get that; I am just looking if there is something similar to false that is already shipped with any Linux/Unix).
Run exit in a subshell.
$ (exit 5) ; echo $?
5
I have this function defined in .bashrc:
return_errorcode ()
{
return $1
}
So, I can directly use something like
$ return_errorcode 5
$ echo $?
5
Compared to (exit 5); echo $? option, this mechanism saves you a subshell.
This is not exactly what you are asking for, but a custom rc can be achieved through the exit command. Run it in a subshell or a script, since exit in your current shell would terminate that shell:
$ (echo "this is a test command, that should return with rc=5"; exit 5)
this is a test command, that should return with rc=5
$ echo $?
5
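Since exit is a builtin, the closest thing to an already-shipped binary is sh itself: sh -c 'exit N' produces any status you like and, being a real executable, can be spawned directly from Python's subprocess:

```shell
rc=0
sh -c 'exit 5' || rc=$?   # capture the status rather than abort under 'set -e'
echo "rc=$rc"             # prints: rc=5
```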
