How to get the filename from an HTTP link in Linux?

In a shell script I have a variable $FILE_LINK containing the following string:
http://links.twibright.com/download/links-2.13.tar.gz
What I need is to extract the filename from the link and store it in a different variable, so the process would look like this:
Set the variable $FILE_LINK
Take the substring after the last "/", in this case 'links-2.13.tar.gz'
Store that string in a variable $FILE_LINK_NAME
How could I achieve that?

If you are using Bash, use parameter expansion:
file_link='http://links.twibright.com/download/links-2.13.tar.gz'
file_link_name="${file_link##*/}"
echo "$file_link_name"   # links-2.13.tar.gz
Or use basename (a standard POSIX utility, also available on macOS):
file_link_name=$(basename "$file_link")
Or this awk:
file_link_name=$(awk -F / '{print $NF}' <<< "$file_link")
Or using sed:
file_link_name=$(sed 's~.*/~~' <<< "$file_link")
PS: I'm avoiding all-uppercase variable names to avoid clashes with environment variables.

LINK='http://links.twibright.com/download/links-2.13.tar.gz'
FILE=$(echo "$LINK" | awk -F/ '{print $NF}')
echo "$FILE"
The output is links-2.13.tar.gz
awk is a good tool for text processing.
https://en.wikipedia.org/wiki/AWK
-F sets the field separator
$NF refers to the last field
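All of the approaches above agree on the question's URL; a quick sketch for comparison:

```shell
#!/bin/bash
link='http://links.twibright.com/download/links-2.13.tar.gz'

# Parameter expansion: strip everything up to and including the last "/"
echo "${link##*/}"                      # links-2.13.tar.gz

# awk: split on "/" and print the last field ($NF)
echo "$link" | awk -F/ '{print $NF}'    # links-2.13.tar.gz

# basename: treats the URL as a path and returns its last component
basename "$link"                        # links-2.13.tar.gz
```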

Related

How to get the rest of the Pattern using any linux command?

I am trying to update a file and do some transformation using any Linux tool.
For example, here I am trying with awk.
It would be great to know how to keep the rest of the pattern.
awk -F '/' '{print $1"/raw"$2}' <<< "string1/string2/string3/string4/string5"
string1/rawstring2
Here I don't know how many "/" there are, and I want to get the output:
string1/rawstring2/string3/string4/string5
Something like
awk -F/ -v OFS=/ '{ $2 = "raw" $2 } 1' <<< "string1/string2/string3/string4/string5"
Just modify the desired field and print the changed line. (OFS has to be set so awk uses a slash instead of a space to separate fields on output, and the bare pattern 1 triggers the default action of printing $0; it's an idiom you'll see a lot in awk.)
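To see why OFS matters, here is a small sketch comparing the same program with and without it:

```shell
#!/bin/bash
s='string1/string2/string3/string4/string5'

# Without OFS: modifying $2 rebuilds $0 joined by the default separator, a space
awk -F/ '{ $2 = "raw" $2 } 1' <<< "$s"
# string1 rawstring2 string3 string4 string5

# With OFS=/: the slashes are preserved on output
awk -F/ -v OFS=/ '{ $2 = "raw" $2 } 1' <<< "$s"
# string1/rawstring2/string3/string4/string5
```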
Also possible with sed:
sed -E 's|([^/]*/)|\1raw|' <<< "string1/string2/string3/string4/string5"
The \1 in the replacement string reproduces the part matched inside the parentheses, with raw appended after it.
Equivalent to
sed 's|\([^/]*/\)|\1raw|' <<< "string1/string2/string3/string4/string5"

Using awk command in Bash

I'm trying to loop an awk command using a bash script, and I'm having a hard time including a variable within the single quotes of the awk command. I'm thinking I should be doing this completely in awk, but I feel more comfortable with bash right now.
#!/bin/bash
index="1"
while [ $index -le 13 ]
do
awk "'"/^$index/ {print}"'" text.txt
done
Use the standard approach -- -v option of awk to set/pass the variable:
awk -v idx="$index" '$0 ~ "^"idx' text.txt
Here I have set the awk variable idx to the value of the shell variable $index. Inside awk, idx is used as an ordinary awk variable.
$0 ~ "^"idx matches if the record starts with (^) whatever the variable idx contains; if so, print the record.
awk '/'"$index"'/' text.txt
# A little play on the script part: split the awk program
# and sandwich the bash variable in between using double quotes.
# Note awk prints by default, so idiomatic awk omits the '{print}' too.
should do, alternatively use grep like
grep "^$index" text.txt # Mind the double quotes; ^ anchors the match like the original /^.../ 
Note: -le performs integer comparison. In the shell, index="1" and index=1 assign the same string value, so the quotes are simply optional here.
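Putting it together, the loop from the question could look like this (a sketch: the sample data stands in for text.txt, whose real contents aren't shown, and an increment is added so the loop terminates):

```shell
#!/bin/bash
# Hypothetical stand-in for the question's text.txt
printf '1 apple\n2 banana\nnothing\n' > text.txt

index=1
while [ "$index" -le 13 ]; do
    # -v passes the shell value into awk safely; no quote juggling needed
    awk -v idx="$index" '$0 ~ "^"idx' text.txt
    index=$((index + 1))   # the original loop never incremented, so it ran forever
done
# prints:
# 1 apple
# 2 banana
```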

How to use a variable in a command that's creating another variable

I created my basic variable from a read command (I've done this manually and using a script):
read NAME
I then want to use that NAME variable to search a file and create another variable:
STUDENT=$(grep $NAME <students.dat | awk -F: '/$NAME/ {print $1}')
If I run the command manually with an actual name from that students.dat file (and not $NAME), it executes and displays what I want. However, when I run this command (manually or from the script using $NAME), it returns blank, and I'm not sure why.
@user1615415: Try:
cat script.ksh
echo "Enter name.."
read NAME
STUDENT=$(awk -vname="$NAME" -F: '($0 ~ name){print $3}' student.dat)
Shell variables aren't interpolated in single quotes, only double quotes.
STUDENT=$(grep "$NAME" <students.dat | awk -F: "/$NAME/ {print \$1}")
$1 needs to be escaped so that it is expanded by awk, not by the shell.
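The quoting difference is easy to demonstrate; a minimal sketch with a made-up students.dat-style line (the name and fields are hypothetical):

```shell
#!/bin/bash
NAME="alice"
line="alice:cs101:3.9"

# Single quotes: $NAME reaches awk literally, so nothing matches
echo "$line" | awk -F: '/$NAME/ {print $1}'
# (no output)

# Double quotes: the shell expands $NAME first; \$1 is left for awk
echo "$line" | awk -F: "/$NAME/ {print \$1}"
# alice
```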

using grep lookup/cut function instead of source to load config file in bash

I have a script that currently loads all my config variables by means of the source command. It's simple, quick, and effective, but I understand that it's not a very secure option.
I've read that I can use the $include directive to achieve the same results. Is that any different or safer than source, or are they essentially the same?
As a final alternative if the above two options are not safe ones, I'm trying to understand a lookup function I found in a shell scripting book. It basically used grep, a delimiter and cut to perform a variable name lookup from the file and retrieve the value. This seems safe and I can use it to modify my scripts.
It almost works as is. I think I just need to change the delimiter from $TAB to "=", but I'm not sure how it works or whether it even will.
My config file format:
Param=value
Sample function (from notes)
lookup() {
grep "^$1$TAB" "$2" | cut -f2
}
Usage:
lookup [options] KEY FILE
-f sets field delimiter
-k sets the number of field which has key
-v specifies which field to return
I'm using Debian version of Raspbian Jessie Lite in case that matters on syntax.
Instead of grep and cut, you should consider using awk, which can do both the search and the cut based on a given delimiter easily:
lookup() {
key="$1"
filename="$2"
awk -F= -v key="$key" '$1 == key{print $2}' "$filename"
# use this awk if = can be present in value part as well
# awk -v key="^$key=" '$0 ~ key && sub(key, "")' "$filename"
}
This can be called as:
lookup Param configfile.txt
-F= sets delimiter as = for awk command.
Also note that $1 and $2 inside the single quotes are awk's fields #1 and #2, and shouldn't be confused with the positional shell parameters $1, $2, etc.
You should look into getopts to make it accept -f, -k etc type arguments.
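A self-contained run of the awk-based lookup (the config file name and contents here are made up for illustration):

```shell
#!/bin/bash
lookup() {
    key="$1"
    filename="$2"
    awk -F= -v key="$key" '$1 == key {print $2}' "$filename"
}

# Hypothetical config in the Param=value format from the question
cat > config.txt <<'EOF'
host=example.org
port=8080
EOF

lookup host config.txt   # example.org
lookup port config.txt   # 8080
```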

passing awk variable to bash script

I am writing a bash/awk script to process hundreds of files under one directory. They all have the name suffix "localprefs". The purpose is to extract two values from each file (each value is enclosed in double quotes). I also want to use the same file name, but without the suffix.
Here is what I did so far:
#!/bin/bash
for file in * # Traverse all the files in current directory.
read -r name < <(awk ' $name=substr(FILENAME,1,length(FILENAME)-10a) END {print name}' $file) #get the file name without suffix and pass to bash. PROBLEM TO SOLVE!
echo $name # verify if passing works.
do
awk 'BEGIN { FS = "\""} {print $2}' $file #this one works fine to extract two values I want.
done
exit 0
I could use
awk '{print substr(FILENAME,1,length(FILENAME)-10)}'
to extract the file name without the suffix, but I am stuck on how to pass that to bash as a variable, which I will use as the output file name (I read through all the posts on this subject here, but maybe I am dumb; none of them works for me).
If anyone can shed light on this, especially the line starting with "read", it would be really appreciated.
Many thanks.
Try this one:
#!/bin/bash
dir="/path/to/directory"
for file in "$dir"/*localprefs; do
name=${file%localprefs} ## Or if it has a .: name=${file%.localprefs}
name=${name##*/} ## To exclude the dir part.
echo "$name"
awk 'BEGIN { FS = "\""} {print $2}' "$file" ## I think you could also use cut: cut -f 2 -d '"' "$file"
done
exit 0
To just take the base name, you don't even need awk:
for file in * ; do
name="${file%.*}"
etc
done
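The expansions used in both answers can be checked in isolation (a quick sketch; the path is hypothetical):

```shell
#!/bin/bash
file="/path/to/settings.localprefs"

name="${file%.localprefs}"   # strip the known suffix
echo "$name"                 # /path/to/settings
echo "${name##*/}"           # strip the directory part: settings
echo "${file%.*}"            # strip from the last ".": /path/to/settings
```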
