Bash loop usage - Linux

I'm currently learning Bash scripting and I have a question about the if / while / until statements. I'm trying to figure out which of them is the best fit for checking the contents of a variable against a text string. Can I use an until statement to check a variable? Example usage is below (I'm using it to check whether a system is updated and, if not, to update it):
#!/bin/bash
# Flush the YUM cache since we've added new directories and repos
flushcache=$(yum clean all);
# Does our system need to be updated?
checkupdate=$(yum update | grep -i "No Packages marked for Update");
# This will update the system
updatesystem=$(yum update -y);
# Flush the YUM cache, to make sure we get the newest package list
echo "$flushcache";
# Using a LOOP (until-logic), let's make sure we're all updated!
if [[ $checkupdate != "No packages marked for update" ]]
then
echo "$updatesystem"
else
echo "They system is already updated";
fi
exit 0;
The script exits normally, so that's good, but I want to know whether I'm implementing my new learnz in the most efficient way possible. Also, will this loop around until $checkupdate is a true statement? I'd love to hear some professional input!
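For reference, here's a rough sketch of the until version I had in mind (just my guess at the syntax; it assumes yum prints "No packages marked for update" once the system is current):
#!/bin/bash
# Keep updating until yum reports there is nothing left to update
until yum update | grep -qi "no packages marked for update"
do
    yum update -y
done
echo "The system is up to date"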
Any help is still help! Thanks for indulging me!

Related

how to extend a command without changing the usage

I have a global npm package, provided by a third party, that generates a report and sends it to a server.
in_report generate -date 20221211
I want to let a group of users check whether the report has already been generated, in order to prevent duplication. Therefore, I want to run a sh script before executing the in_report command.
sh check.sh && in_report generate -date 20221211
The problem is that I don't want to change the command they use to generate the report. I can patch their PCs (I'm able to change the env path, etc.).
Is it possible to run sh check.sh && in_report generate -date 20221211 by running in_report generate -date 20221211?
If this "in_report" is only used for this exact purpose, you can create an alias by putting the following line at the end of the ".bashrc" or ".bash_aliases" file that is used by the people who will need to run in_report :
alias in_report='sh check.sh && in_report'
See https://doc.ubuntu-fr.org/alias for details.
If in_report is to be used in other ways too, this is not the solution. In that case, you may want to call it directly inside check.sh when a certain set of conditions on the parameters is matched. To do that:
alias in_report='bash check.sh'
The content of check.sh :
#!/bin/bash
# bash rather than plain sh: the [[ ... ]] tests below are a bashism
if [[ $# -eq 3 && "$1" == "generate" && "$2" == "-date" && "$3" == "20"* ]] # Assuming that all your dates must be in the 21st century
then
    if [[ some test to check that the report has not been generated yet ]]
    then
        /full/path/to/the/actual/in_report "$@" # WARNING: be sure that nobody will move the actual in_report to another path
    else
        echo "This report already exists"
    fi
else
    /full/path/to/the/actual/in_report "$@"
fi
This sure isn't ideal, but it should work. By far the easiest and most reliable solution, if applicable, would be to skip the aliasing entirely and tell those who will use in_report to run your check.sh instead (with the same parameters they would pass to in_report); then you can call in_report directly instead of /full/path/to/the/actual/in_report.
Sorry if this was not very clear. In that case, feel free to ask.
On most modern Linux distros the easiest would be to place a shell script that defines a function in /etc/profile.d, e.g. /etc/profile.d/my_report.sh (the .sh suffix matters: /etc/profile typically only sources files matching *.sh), with a content of
function in_report() { sh check.sh && /path/to/in_report "$@"; }
That way it gets automatically placed in people's environments when they log in.
The /path/to is important so the function doesn't call itself recursively.
A cursory glance through the doco for the Mac suggests that you may want to edit /etc/bashrc or /etc/zshrc respectively.

Integrate a built-in update function in a shell script in order to receive OTA updates when they are available

I'm stuck on something that would be awesome to integrate.
My idea is to create a function, run at a certain time, which checks whether there is a new version of the script. But I don't know how to put the commands together.
I already have a sort of sketch here:
SCRIPT_NAME="$0"
ARGS=("$@")
NEW_FILE="/tmp/blog.sh"
VERSION="1.0"
check_upgrade () {
    # check if there is a new version of this file
    # here, hypothetically, we check if a file exists on disk.
    # it could be an apt/yum check or whatever...
    [ -f "$NEW_FILE" ] && {
        # install a new version of this file or package
        # again, in this example, this is done by just copying the new file
        echo "Found a new version of me, updating myself..."
        cp "$NEW_FILE" "$SCRIPT_NAME"
        rm -f "$NEW_FILE"
        # note that at this point this file was overwritten on disk
        # now run this very own file, in its new version!
        echo "Running the new version..."
        "$SCRIPT_NAME" "${ARGS[@]}"
        # now exit this old instance
        exit 0
    }
}
I know it's possible to do this, but I didn't find anything useful on the internet.
Every advice will be much appreciated.
Assuming the script is always running, make another script that curls the file and checks it against the original. Something like:
if [ version newer ]; then
    kill old version
    mv "new version" "old version"
    ./new version
else
    delete tmp file
fi
Run it with cron at intervals you see fit.
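A minimal concrete sketch of that idea, assuming the script is installed at a hypothetical /usr/local/bin/blog.sh and published at a hypothetical URL:
#!/bin/bash
# Sketch only: the URL and install path are assumptions, not fixed values.
URL="https://example.com/blog.sh"
LOCAL="/usr/local/bin/blog.sh"
TMP=$(mktemp)
# Fetch the published copy; bail out quietly if the download fails.
curl -fsS "$URL" -o "$TMP" || { rm -f "$TMP"; exit 1; }
if ! cmp -s "$TMP" "$LOCAL"; then
    # The copies differ: stop the running instance and swap in the new file.
    pkill -f "$LOCAL"
    mv "$TMP" "$LOCAL"
    chmod +x "$LOCAL"
    "$LOCAL" &
else
    rm -f "$TMP"
fi
A crontab entry such as */30 * * * * /usr/local/bin/check_update.sh would then run the check every 30 minutes.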

One-liner to append a file to another file, but only if it hasn't already been added

I have an automated process that has a number of lines like the following pattern:
sudo cat /some/path/to/a/file >> /some/other/file
I'd like to transform that into a one-liner that will only append to /some/other/file if the contents of /some/path/to/a/file have not already been added.
Edit
It's clear I need some examples here.
example 1: Updating a .bashrc script for a specific login
example 2: Creating a .screenrc for different logins
example 3: Appending to the end of a config file in /etc
Some other caveats: the text is going to be added as a block (>>). Consequently, it should be relatively straightforward to see whether the entire block has been added near the end of the file. I am trying to come up with a simple method for determining whether or not the file has already been appended to the original.
Thanks!
Example python script...
def check_for_appended(new_file, original_file):
    """ Checks original_file to see if it has the contents of new_file """
    new_lines = reversed(new_file.split("\n"))
    original_lines = reversed(original_file.split("\n"))
    appended = None
    for new_line, orig_line in zip(new_lines, original_lines):
        if new_line != orig_line:
            appended = False
            break
    else:
        appended = True
    return appended
Maybe this will get you started - this GNU awk script:
gawk -v RS='^$' 'NR==FNR{f1=$0;next} {print (index($0,f1) ? "present" : "absent")}' file1 file2
will tell you if the contents of "file1" are present in "file2". It cannot tell you why they are present, e.g. whether it is because you previously concatenated file1 onto the end of file2.
Is that all you need? If not update your question to clarify/explain.
Here's a technique to see if a file contains another file
contains_file_in_file() {
    local small=$1
    local big=$2
    # With RS="" awk reads blank-line-separated paragraphs, so for files
    # that contain no blank lines each file becomes a single record:
    # record 1 is $small, and getline fetches $big.
    awk -v RS="" '{small=$0; getline; exit !index($0, small)}' "$small" "$big"
}
if ! contains_file_in_file /some/path/to/a/file /some/other/file; then
    sudo cat /some/path/to/a/file >> /some/other/file
fi
EDIT: The OP just told me in the comments that the files he wants to concatenate are bash scripts -- this brings us back to the good ole C preprocessor include-guard tactics:
prepend every file with
if [ -z "$__<filename>__" ]; then __<filename>__=1; else
(of course replacing <filename> with the name of the file) and at the end
fi
this way, you surround the script in each file with a test that only succeeds the first time the file is sourced.
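For instance, a fragment hypothetically named aliases.sh would end up looking like this (dots in the filename replaced with underscores, since they are not valid in variable names):
if [ -z "$__aliases_sh__" ]; then __aliases_sh__=1;
    # ... original contents of aliases.sh ...
fi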
Does this work for you?
sudo bash -c 'set -o noclobber; date > /tmp/testfile'
noclobber prevents overwriting an existing file.
I think it doesn't quite fit, since you wrote that you want to append something, but this technique might still help.
When all the appending occurs in one script, use a flag:
if [ -z "${appended_the_file}" ]; then
    cat /some/path/to/a/file >> /some/other/file
    appended_the_file="Yes I have done it except for permission/right issues"
fi
I would continue by writing a function appendOnce { .. } with the content above (see the sketch after the sudo variant below). If you really want an ugly one-liner (ugly: a pain for the eye and for colleagues):
test -z "${ugly}" && cat /some/path/to/a/file >> /some/other/file && ugly="dirt"
Combining this with sudo:
test -z "${ugly}" && sudo "cat /some/path/to/a/file >> /some/other/file" && ugly="dirt"
It appears that what you want is a collection of script segments which can be run as a unit. Your approach -- making them into a single file -- is hard to maintain and subject to a variety of race conditions, making its implementation tricky.
A far simpler approach, similar to that used by most modern Linux distributions, is to create a directory of scripts, say ~/.bashrc.d and keep each chunk as an individual file in that directory.
The driver (which replaces the concatenation of all those files) just runs the scripts in the directory one at a time:
if [[ -d ~/.bashrc.d ]]; then
    for f in ~/.bashrc.d/*; do
        if [[ -f "$f" ]]; then
            source "$f"
        fi
    done
fi
To add a file from a skeleton directory, just make a new symlink.
add_fragment() {
    if [[ -f "$FRAGMENT_SKELETON/$1" ]]; then
        # The following will silently fail if the symlink already
        # exists. If you wanted to report that, you could add || echo...
        # Note the tilde is left unquoted so that it expands.
        ln -s "$FRAGMENT_SKELETON/$1" ~/.bashrc.d/"$1" 2>>/dev/null
    else
        echo "Not a valid fragment name: '$1'"
        exit 1
    fi
}
Of course, it is possible to effectively index the files by contents rather than by name. But in most cases, indexing by name will work better, because it is robust against editing the script fragment. If you used content checks (md5sum, for example), you would run the risk of having an old and a new version of the same fragment, both active, and without an obvious way to remove the old one.
But it should be straightforward to adapt the above structure to whatever requirements and constraints you might have.
For example, if symlinks are not possible (because the skeleton and the instance do not share a filesystem, for example), then you can copy the files instead. You might want to avoid the copy if the file is already present and has the same content, but that's just for efficiency and it might not be very important if the script fragments are small. Alternatively, you could use rsync to keep the skeleton and the instance(s) in sync with each other; that would be a very reliable and low-maintenance solution.
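For the rsync variant, a one-line sketch (reusing the hypothetical $FRAGMENT_SKELETON variable from above):
# --delete also removes fragments that were dropped from the skeleton
rsync -a --delete "$FRAGMENT_SKELETON/" ~/.bashrc.d/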

Uninstall software packages one after another using a shell script

I want to uninstall two pieces of software, one after the other; that is, I don't want to start the second uninstaller until the first one has completed.
Can anyone suggest how I can achieve this?
This is what I have now.
uninstall.sh:
if [ $exitval -eq 0 ]; then
    ./uninstall1.sh
else
    echo "uninstall1.sh else branch"
fi
result=$?
if [ $result -eq 0 ]; then
    ./uninstall2.sh
else
    echo "uninstall2.sh else branch"
fi
The issue here is that uninstaller1 launches a UI, and the uninstaller2 UI gets launched before uninstaller1 completes. This is what I don't want.
I want to launch uninstall2 only when uninstall1 has finished.
Update: after googling, I learned that this can be achieved using the wait command, but I'm still struggling with the same issue.
Thanks in advance.
Anyhow, I'll just post my pending suggestion:
SomeLauncher1.sh &             ## Backgrounded so that $! is set.
PID=$!                         ## Not really the way to do it, but this is one way.
while kill -s 0 "$PID" 2>/dev/null; do  ## If true, the process is still running.
    sleep 1                    ## Keep waiting.
done
SomeLauncher2.sh
...                            ## Perhaps do the same thing again.
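Since the question mentions wait, here is a minimal sketch of that route; it assumes each uninstaller blocks until its UI is closed:
#!/bin/bash
./uninstall1.sh &
wait $!                 # block until uninstall1.sh has exited
if [ $? -eq 0 ]; then
    ./uninstall2.sh
else
    echo "uninstall1.sh failed; skipping uninstall2.sh"
fi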

How to check Linux version with Autoconf?

My program requires at least Linux 2.6.26 (I use timerfd and some other Linux-specific features).
I have a general idea of how to write this macro, but I don't have enough knowledge about writing test macros for Autoconf. The algorithm:
Run "uname --release" and store output
Parse output and subtract Linux version number (MAJOR.MINOR.MICRO)
Compare version
I don't know how to run command, store output and parse it.
Maybe such macro already exists and it's available (I haven't found any)?
I think you'd be better off detecting the specific functions you need using AC_CHECK_FUNC, rather than testing for a specific kernel version.
This will also prevent breakage if you find yourself cross-compiling at some point in the future.
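A minimal configure.ac sketch of that approach (timerfd_create() is the entry point of the timerfd API the question mentions):
# Fail at configure time if the function is not available.
AC_CHECK_FUNC([timerfd_create], [],
              [AC_MSG_ERROR([timerfd_create() is required but was not found])])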
There is a macro for steps 2 (parse) and 3 (compare) of your algorithm: ax_compare_version. For example:
linux_version=$(uname --release)
AX_COMPARE_VERSION($linux_version, [eq3], [2.6.26],
                   [AC_MSG_NOTICE([Ok])],
                   [AC_MSG_ERROR([Bad Linux version])])
Here I used eq3 so that if $linux_version contains additional strings, such as -amd64, the comparison still succeeds. There is a plethora of comparison operators available.
I would suggest that you check not the Linux version number, but the specific types or functions you need. Who knows, maybe someone will decide to backport timerfd_settime() to 2.4.x? So I think AC_CANONICAL_TARGET and AC_CHECK_LIB or similar are your friends. If you need to check function arguments or test behaviour, you'd better write a simple program and use AC_LANG_CONFTEST([AC_LANG_PROGRAM(...)])/AC_TRY_RUN to do the job.
Without going too deep and writing autoconf macros properly (which would be preferable anyway), don't forget that configure.ac is basically a shell script preprocessed by m4. So you can write shell commands directly.
# prev. part of configure.ac
if test `uname -r | cut -d. -f1` -lt 2; then echo "major v. error"; exit 1; fi
if test `uname -r | cut -d. -f2` -lt 6; then echo "minor v. error"; exit 1; fi
if test `uname -r | cut -d. -f3` -lt 26; then echo "micro v. error"; exit 1; fi
# ...
This is just an idea, if you want to avoid writing macros for autoconf. This choice is not good, but it should work...
The best way is the one already suggested: you should check for features. Say in a future kernel timerfd is no longer available, or has changed in a way that breaks your code: you won't catch it, since you test for the version.
edit
As user foof says in the comments (in other words), this is a naive way to check MAJOR.MINOR.MICRO. E.g. 3.5.1 will fail because 5 is less than 6, but 3.5.1 comes after 2.6.26, so it should (likely) be accepted. There are many tricks that can be used to transform x.y.z into a representation that puts each version in its natural order. E.g. if we expect that x, y, and z won't be greater than 999, we can multiply the major by 1000000, the minor by 1000, and the micro by 1; then we can compare the result with 2006026, as foof suggested in the comment(s).
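A plain-shell sketch of that encoding (it assumes a MAJOR.MINOR.MICRO kernel version; suffixes such as -amd64 are stripped first):
kver=`uname -r | cut -d- -f1`    # e.g. 2.6.32-5-amd64 -> 2.6.32
major=`echo $kver | cut -d. -f1`
minor=`echo $kver | cut -d. -f2`
micro=`echo $kver | cut -d. -f3`
if test `expr $major \* 1000000 + $minor \* 1000 + $micro` -lt 2006026; then
    echo "Linux 2.6.26 or newer required"; exit 1
fi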
