Parsing part of a file name to use as a variable in bash - linux

I need to get the DB name from part of a file name and use it as a variable. Currently I have it pulling in the name of the file from an ls command. This is what it returns:
Test-20150311-1200.sql
I need the first part of the filename up to the "-", so for this example it would be Test. How can I get just that part of the file name into my variable? Below is the full script I am working on. Any help would be great. Thanks.
#!/bin/bash
DB_USER=x
DB_PASS=x
DB=

for DB_GZFILE in $( ls *.sql.gz ); do
    gunzip $DB_GZFILE
    echo item: $DB_FILE unzipped
done

for DB_FILE in $( ls *.sql ); do
    #Use this statement to insert dump into a new server
    mysql $DB <$DB_FILE

    #Use this command to insert into a server already in use
    # mysql -u$DB_USER -p$DB_PASS <$DB_FILE
    echo $DB_FILE inserted into database

    #Remove sql files used to insert into this server
    # rm $DB_FILE
    # echo $DB_FILE removed
done

echo restarting the mysql process .....
# /etc/init.d/mysql restart
echo mysql restarted

Don't parse ls; loop over the glob patterns directly:
for DB_GZFILE in *.sql.gz; do
for DB_FILE in *.sql; do
To get just the first part, use parameter substitution to remove the first - and all following characters:
first_part=${DB_FILE%%-*}
You should quote all your "$vars", especially any whose value you get from the user or from the filesystem: you never know when you'll get a filename with a space in it. Example:
gunzip "$DB_GZFILE"
I'd recommend you do not use ALL_CAPS_VARNAMES: one day you'll accidentally use PATH=... and then wonder why your script is broken. Leave ALL_CAPS for system environment variables and shell special vars.
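Putting those pieces together, a sketch of the whole script could look like this (lowercase variable names per the note above; I'm assuming the database named in the file already exists on the target server and reusing the user/password variables from the question):

#!/bin/bash
db_user=x
db_pass=x

for gzfile in *.sql.gz; do
    gunzip "$gzfile"
    echo "item: $gzfile unzipped"
done

for db_file in *.sql; do
    db=${db_file%%-*}    # e.g. Test-20150311-1200.sql -> Test
    mysql -u"$db_user" -p"$db_pass" "$db" < "$db_file"
    echo "$db_file inserted into database $db"
done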

Related

How does one create a wrapper around a program?

I want to learn to create a wrapper around a program in Linux. How does one do this? A tutorial, reference web page/link, or example will do. To clarify what I want to learn, I will explain with an example.
I use vim for editing text files, and rcs as my simple revision control system. rcs allows you to check files in and out. I would like to create a wrapper program named vir so that when I type in the shell:
$ vir temp.txt
it will load the file temp.txt into rcs with ci -u temp.txt and then let me edit the file using vim.
When I quit and come back in, it will need to check the file out first, using co -l temp.txt, let me edit the file as one normally does with vim, and then when I save and exit, it should check the file back in using ci -u temp.txt, and as part of that I should be able to add a version control comment.
Basically, all I want to be doing on the command line is:
$ vir temp.txt
as one would with vim. And the wrapper should take care of the version control for me.
Take a look at rcsvers.vim, a vim plugin for automatically saving versions in RCS; you could modify that. There are also other RCS plugins for vim at vim.org
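For the shell side of it, a minimal sketch of such a vir wrapper (assuming the standard RCS tools ci/co are installed and a plain check-out/edit/check-in cycle is all you need) might be:

#!/bin/bash
# vir: edit a file under RCS control with vim
file="$1"

# first run only: put the file under RCS control
if [ ! -f "${file},v" ] && [ ! -f "RCS/${file},v" ]; then
    ci -u "$file"
fi

co -l "$file"    # check the file out and lock it for editing
vim "$file"      # edit as usual
ci -u "$file"    # check it back in; ci prompts for a log message

You would still want to add error handling (for example, refusing to run without an argument), but it shows the basic pattern.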
I have a wrapper to enhance the ping command (using zsh); maybe it could help you:
# ping command wrapper - Last Change: Oct 27 2019 18:47
# source: https://www.cyberciti.biz/tips/unix-linux-bash-shell-script-wrapper-examples.html
ping(){
    # Name: ping() wrapper
    # Arg: (url|domain|ip)
    # Purpose: Send ping request to domain by removing urls, protocol, username:pass using system /usr/bin/ping
    local array=( "$@" )       # get all args in an array
    local host=${array[-1]}    # get the last arg
    local args=${array[1,-2]}  # get all args before the last one
    #local _ping="/usr/bin/ping"
    local _ping="/bin/ping"
    local c=$(_getdomainnameonly "$host")
    [ "$host" != "$c" ] && echo "Sending ICMP ECHO_REQUEST to \"$c\"..."
    # pass args and host
    # $_ping $args $c
    # default args for ping
    $_ping -n -c 2 -i 1 -W1 "$c"
}
_getdomainnameonly(){
    # Name: _getdomainnameonly
    # Arg: Url/domain/ip
    # Returns: Only domain name
    # Purpose: Get domain name and remove protocol part, username:password and other parts from url
    # get url
    local h="$1"
    # upper to lowercase
    local f="${h:l}"
    # remove protocol part of hostname
    f="${f#http://}"
    f="${f#https://}"
    f="${f#ftp://}"
    f="${f#scp://}"
    f="${f#sftp://}"
    # Remove username and/or username:password part of hostname
    f="${f#*:*@}"
    f="${f#*@}"
    # remove all /foo/xyz.html*
    f=${f%%/*}
    # show domain name only
    echo "$f"
}
What it does is hide the system ping behind a function called "ping", so when you type ping the shell finds the function first. Then, inside the function, I define an internal variable called _ping that points to the real ping command:
local _ping="/bin/ping"
You can also see that the arguments are stored in an array.
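For example, calling the wrapper with a full (hypothetical) URL strips it down to the bare host before pinging:

ping https://user:secret@www.example.com/some/page.html
# -> Sending ICMP ECHO_REQUEST to "www.example.com"...
ping 192.168.1.1
# -> nothing to strip, pings 192.168.1.1 directly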

Assign full text file path to a variable and use variable as file path in sh file

I am trying to create a shell script for logs and trying to append data into a text file. I have written this sample "test.sh" code for testing:
#!/bin/sh -e
touch /home/sample.txt
SPTH = '/home/sample'.txt
echo "MY LOG FILE" >> "$SPTH"
echo "DUMP started at $(date +'%d-%m-%Y %H:%M:%S')" >> /home/sample.txt
echo "DUMP finished at $(date +'%d-%m-%Y %H:%M:%S')" >> /home/sample.txt
but in the above code all lines work correctly except one line, i.e.
echo "MY LOG FILE" >> "$SPTH"
It is giving error:
test.sh: line 6: : No such file or directory
I want to replace the full file path "/home/sample.txt" with the variable "$SPTH".
I am executing my shell script using
sh test.sh
What am I doing wrong?
Variable assignments in the bash shell do not allow spaces around the =. With spaces, the line is actually interpreted as a command (the first word) with = and the subsequent words as its arguments, which is not what you want.
Change your code to
SPTH="/home/sample.txt"
That is the reason why SPTH was not assigned the path you intended it to have. There is also no reason to single-quote only part of the string and leave the .txt extension outside; putting the whole path in double quotes is absolutely fine.
The syntax of the command line is that the first token is a command and tokens are separated by whitespace. So:
SPTH = '/home/sample'.txt
has SPTH as the command, = as the second token, and so on. You might think this is daft, but most shells behave like this for historical reasons.
So you need to remove the whitespace:
SPTH='/home/sample'.txt
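For completeness, the sample script with just that line fixed (and the variable reused for the other two echo lines as well) would be:

#!/bin/sh -e
touch /home/sample.txt
SPTH="/home/sample.txt"
echo "MY LOG FILE" >> "$SPTH"
echo "DUMP started at $(date +'%d-%m-%Y %H:%M:%S')" >> "$SPTH"
echo "DUMP finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$SPTH"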

how to print the output/error to a text file?

I'm trying to redirect(?) my standard error/output to a text file.
I did my research, but for some reason the online answers are not working for me.
What am I doing wrong?
cd /home/user1/lists/
for dir in $(ls)
do
(
echo | $dir > /root/user1/$dir" "log.txt
) > /root/Desktop/Logs/Update.log
done
I also tried
2> /root/Desktop/Logs/Update.log
1> /root/Desktop/Logs/Update.log
&> /root/Desktop/Logs/Update.log
None of these work for me :(
Help please!
Try this for the basics:
echo hello >> log.txt 2>&1
Could be read as: echo the word hello, redirecting and appending STDOUT to the file log.txt. STDERR (file descriptor 2) is redirected to wherever STDOUT is being pointed. Note that STDOUT is the default and thus there is no "1" in front of the ">>". Works on the current line only.
To redirect and append all output and error of all commands in a script, put this line near the top. It will be in effect for the length of the script instead of doing it on each line:
exec >>log.txt 2>&1
If you are trying to obtain a list of the files in /home/user1/lists, you do not need a loop at all:
ls /home/user1/lists/ >Update.log
If you are attempting to run every file in the directory as an executable with a newline as its input, and collect the output from all these programs in Update.log, try this:
for file in /home/user1/lists/*; do
echo | "$file"
done >Update.log
(Notice how we avoid the useless use of ls and how there is no redirection inside the loop.)
If you want to create an empty file called <filename>log.txt for each file in the directory, you would do
for file in /home/user1/lists/*; do
touch "$(basename "$file")"log.txt
done
(Using basename to obtain the file name without the directory part avoids the cd but you could do it the other way around. Generally, we tend to avoid changing the directory in scripts, so that the tool can be run from anywhere and generate output in the current directory.)
If you want to create a file containing a single newline, regardless of whether it already exists or not,
for file in /home/user1/lists/*; do
echo >"$(basename "$file")"log.txt
done
In your original program, you redirect the echo inside the loop, which means that the outer redirection to Update.log will not receive any output at all, so the file it creates will be empty.
These are somewhat wild guesses at what you might actually be trying to accomplish, but should hopefully help nudge you slightly in the right direction. (This should properly be a comment, I suppose, but it's way too long and complex.)

Use shell to load in variables to replace placeholders

I have a problem where my config files' contents are placed within my deployment script because they get their settings from my setting.sh file. This causes my deployment script to be very large and bloated.
I was wondering if it would be possible in bash to do something like this:
setting.sh
USER="Tom"
log.conf
log=/$PLACEHOLDER_USER/full.log
deployment.sh
#!/bin/bash
# Pull in settings file
. ./settings.sh
# Link config to right location
ln -s /home/log.conf /home/logging/log.conf
# Write variables on top of placeholder variables in the file
for $PLACEHOLDER_* in /home/logging/log.conf
do
(Replace $PLACEHOLDER_<VARIABLE> with $VARIABLE)
done
I want this to work for any variable found in the config file which starts with $placeholder_
This process would allow me to move a generic config file from my repository and then add the proper variables from my setting file on top of the placeholder variables in the config.
I'm stuck on how I can get this to actually work using my deployment.sh.
This small script will read each variable line from settings.sh and replace the corresponding PLACEHOLDER_xxx in the file. Does this help you?
while IFS== read variable value
do
    sed -i "s/\$PLACEHOLDER_$variable/$value/g" file
done < settings.sh
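For example, with the settings.sh and log.conf from the question (and with file pointing at /home/logging/log.conf), the line USER="Tom" turns

log=/$PLACEHOLDER_USER/full.log

into

log=/"Tom"/full.log

Note that the double quotes from settings.sh are carried over literally by the sed substitution; if you don't want them in the config file, strip them from $value first.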
#!/usr/bin/env bash
set -x
ln -s /home/log.conf /home/logging/log.conf
while read user
do
    usertmp=$(echo "${user}" | sed 's#USER="##' | sed 's#"$##')
    user="${usertmp}"
    log="${user}"/full.log
done < setting.sh
I don't really understand the rest of what you're trying to do, I will confess, but this will hopefully give you the idea. Use read.

stdout all at once instead of line by line

I wrote a script that gets load and mem information for a list of servers by ssh'ing to each server. However, since there are around 20 servers, it's not very efficient to wait for the script to end. That's why I thought it might be interesting to make a crontab that writes the output of the script to a file, so all I need to do is cat this file whenever I need to know load and mem information for the 20 servers. However, when I cat this file during the execution of the crontab it will give me incomplete information. That's because the output of my script is written line by line to the file instead of all at once at termination. I wonder what needs to be done to make this work...
My crontab:
* * * * * (date;~/bin/RUP_ssh) &> ~/bin/RUP.out
My bash script (RUP_ssh):
for comp in `cat ~/bin/servers`; do
ssh $comp ~/bin/ca
done
Thanks,
niefpaarschoenen
You can buffer the output to a temporary file and then output all at once like this:
outputbuffer=`mktemp` # Create a new temporary file, usually in /tmp/
trap "rm '$outputbuffer'" EXIT # Remove the temporary file if we exit early.
for comp in `cat ~/bin/servers`; do
ssh $comp ~/bin/ca >> "$outputbuffer" # gather info to buffer file
done
cat "$outputbuffer" # print buffer to stdout
# rm "$outputbuffer" # delete temporary file, not necessary when using trap
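Since cat may run while cron is still in the middle of writing the file, another option is to build the report in a temporary file and only move it into place once it is complete; mv within the same filesystem is atomic, so readers always see either the old or the new complete report. A sketch (assuming RUP.out should end up in ~/bin as in your crontab):

tmp=$(mktemp ~/bin/RUP.out.XXXXXX)   # temp file on the same filesystem as RUP.out
date > "$tmp"
for comp in `cat ~/bin/servers`; do
    ssh "$comp" ~/bin/ca >> "$tmp"
done
mv "$tmp" ~/bin/RUP.out              # atomic replace

With this, the crontab entry just runs ~/bin/RUP_ssh without any redirection.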
Assuming there is a string to identify which host the mem/load data has come from, you can update your text file as each result comes in. Assuming the data block is one line long, you could use
for comp in `cat ~/bin/servers`; do
output=$( ssh $comp ~/bin/ca )
# remove old mem/load data for $comp from RUP.out
sed -i '/'"$comp"'/d' RUP.out # this assumes that the string "$comp" is
# integrated into the output from ca, and
# not elsewhere
echo "$output" >> RUP.out
done
This can be adapted depending on the output of ca. There is lots of help on sed across the net.
