Getting the Canonical Time Zone name in shell script - linux

Is there a way of getting the canonical time zone name from a Linux shell script? For example, if my configured time zone is PDT, then I would like to get "America/Los_Angeles".
I know I could get it from the symbolic link /etc/localtime if it were configured, but as it might not be configured on all servers I cannot rely on it.
On the other hand, I can get the short time zone name with the command date +%Z, but I still need the canonical name.
Is there a way to either get the canonical name of the current time zone, or to convert the abbreviation returned by date +%Z, even if the symbolic link /etc/localtime is not set?

This is more complicated than it sounds. Linux distributions handle it differently, so there is no 100% reliable way to get the Olson TZ name.
Below is the heuristic that I have used in the past:
First, check /etc/timezone; if it exists, use it.
Next, check whether /etc/localtime is a symlink into the time zone database.
Otherwise, find a file in /usr/share/zoneinfo with the same content as /etc/localtime.
Untested example code:
if [ -f /etc/timezone ]; then
    # Debian-style: the name is stored directly in /etc/timezone
    OLSONTZ=$(cat /etc/timezone)
elif [ -h /etc/localtime ]; then
    # /etc/localtime is a symlink into the zoneinfo database
    OLSONTZ=$(readlink /etc/localtime | sed 's|.*/usr/share/zoneinfo/||')
else
    # fall back to matching /etc/localtime against the database by content
    checksum=$(md5sum /etc/localtime | cut -d' ' -f1)
    OLSONTZ=$(find /usr/share/zoneinfo/ -type f -exec md5sum {} \; |
        grep "^$checksum" | sed 's|.*/usr/share/zoneinfo/||' | head -n 1)
fi
echo "$OLSONTZ"
Note that this quick example does not handle the case where multiple TZ names match the given file (when looking in /usr/share/zoneinfo). Disambiguating the appropriate TZ name will depend on your application.
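If you need to disambiguate, a small change to the last step prints every matching zone name instead of just the first (equally untested):
checksum=$(md5sum /etc/localtime | cut -d' ' -f1)
find /usr/share/zoneinfo/ -type f -exec md5sum {} \; |
    grep "^$checksum" |
    sed 's|.*/usr/share/zoneinfo/||'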
-nick

Related

Changing File name Dynamically in linux bash

I want to change a filename "Domain_20181012230112.csv" to "Domain_12345_20181012230112.csv", where "Domain" and "12345" are constants while 20181012230112 will always change but has a fixed length. How can I do this in bash?
If all you want is to replace Domain_ with Domain_12345_, then just do
for file in Domain_*; do
    mv "$file" "${file/Domain_/Domain_12345_}"
done
You can make it even shorter if you know that there will only be one underscore:
...
mv "$file" "${file/_/_12345_}"
...
See string substitutions for more info.
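A quick illustration of both forms (a single / replaces the first match, // replaces all matches):
file=Domain_20181012230112.csv
echo "${file/Domain_/Domain_12345_}"   # first match only
# Domain_12345_20181012230112.csv
echo "${file//1/X}"                    # every match
# Domain_20X8X0X2230XX2.csv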
You can use mv in a for loop, like this:
for file in Domain_??????????????.csv; do ts=$(echo "$file" | cut -c8-21); mv "$file" "Domain_12345_${ts}.csv"; done
Given the one file in your example, this will essentially execute this command:
mv Domain_20181012230112.csv Domain_12345_20181012230112.csv
You can simply use the date command to get the date and time information you want
date '+%Y-%m-%d %H:%M:%S'
# 2018-10-26 10:25:47
To then use the result within the filename, you can put it in backticks to evaluate it inline. For example, you can run
echo "Domain_12345_`date '+%Y-%m-%d %H:%M:%S'`"
# Domain_12345_2018-10-26 10:29:17
You can use date's man page to figure out the option for milliseconds as well.
man date
There are options like %m and %d, for example, that always produce leading zeroes where necessary, so the file name length stays constant.
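If your date is GNU date (the default on most Linux distributions), %N prints nanoseconds and a width prefix like %3N truncates that to milliseconds, keeping the length fixed:
date '+%Y%m%d%H%M%S%3N'
# e.g. 20181026102547123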
To then rename the file you can use the mv (move) command
mv "Domain_20181012230112.csv" "Domain_12345_`date '+%Y-%m-%d %H:%M:%S'`.csv"
Good luck with the rest of the exercise!

How do I extract the date from multiple files with dates in their names?

Let's say I have multiple filenames, e.g. R014-20171109-1159.log.20171109_1159.
I want to create a shell script which creates a folder for every given date and moves the files matching that date into it.
Is this possible?
For the example, a folder "20171109" should be created and contain the file "R014-20171109-1159.log.20171109_1159".
Thanks
This is a typical application of a for loop in bash to iterate through files.
At the same time, this solution uses bash shell parameter substitution.
for file in /path/to/files/*.log.*
do
    foldername=${file#*-}          # strip up to and including the first "-"
    foldername=${foldername%%-*}   # strip from the next "-" to the end
    mkdir -p "${foldername}"       # -p suppresses errors if folder already exists
    [ $? -eq 0 ] && mv "${file}" "${foldername}"   # check last cmd status and move
done
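To see what the two expansions produce on the example filename:
file="R014-20171109-1159.log.20171109_1159"
echo "${file#*-}"    # 20171109-1159.log.20171109_1159 (shortest prefix match removed)
tmp=${file#*-}
echo "${tmp%%-*}"    # 20171109 (longest suffix match removed)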
Since you want to write a shell script, use commands. To get the date, use the cut command, for example:
cat 1.txt
R014-20171109-1159.log.20171109_1159
cat 1.txt | cut -d "-" -f2
Output
20171109
is your date; use it to create the folder. This way you can loop and create as many folders as you want, as sketched below.
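Put together in a loop, that could look like this (a sketch; the R014-* glob is an assumption about your filenames):
for fileName in R014-*; do
    dirVar=$(echo "$fileName" | cut -d "-" -f2)   # e.g. 20171109
    mkdir -p "$dirVar"
    mv "$fileName" "$dirVar/"
done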
It's actually quite easy (my Bash syntax might be a bit off):
for f in /path/to/your/files*; do
    ## Check if the glob gets expanded to existing files.
    ## If not, f here will be exactly the pattern above
    ## and the exists test will evaluate to false.
    [ -e "$f" ] || continue
    ## Extract the 8 characters that follow ".log." to use as the folder name.
    d=$(echo "$f" | sed 's/.*\.log\.\(.\{8\}\).*/\1/')
    ## Create the folder if it doesn't exist already,
    ## then move the file into it.
    mkdir -p "$d"
    mv "$f" "$d/"
done
The main idea is from this post. Apologies if the code is rough; I haven't worked in Bash recently.
The commands below can be put in a script to achieve this.
Assign a variable the current date as below (use the --date='n day ago' option if you need an older date).
If you need to get it from the file name itself, loop over the files and use the cut command to get the date string:
dirVar=$(date +%Y%m%d) --> for the current day
dirVar=$(date +%Y%m%d --date='1 day ago') --> for yesterday
dirVar=$(echo "$fileName" | cut -c6-13) or
dirVar=$(echo "$fileName" | cut -d- -f2) --> to get it from $fileName
Create the directory from the variable's value as below (-p: no error if the directory already exists):
mkdir -p "${dirVar}"
Move the files into that directory with the line below:
mv *log.${dirVar}* "${dirVar}/"

shell - faster alternative to "find"

I'm writing a shell script which should output the oldest file in a directory.
This directory is on a remote server and has (worst case) between 1000 and 1500 (temporary) files in it. I have no access to the server and no influence on how the files are stored. The server is connected through a stable but not very fast line.
The result of my script is passed to a monitoring system which in turn alerts the staff if there are too many (= unprocessed) files in the directory.
Unfortunately the monitoring system only allows a maximum execution time of 30 seconds for my script before a timeout occurs.
Testing with small directories wasn't a problem; testing with the target directory over the remote mount (approx. 1000 files) it is.
So I'm looking for the fastest way to get things like "the oldest / newest / largest / smallest" file in a directory (not recursive) without using 'find' or sorting the output of 'ls'.
Currently I'm using this statement in my sh script:
old)
    # return oldest file (age in seconds)
    oldest=$(find "$2" -maxdepth 1 -type f | xargs ls -tr | head -1)
    timestamp=$(stat -f %B "$oldest")
    curdate=$(date +%s)
    echo $((curdate - timestamp))
    ;;
and I tried this one:
gfind /livedrive/669/iwt.save -type f -printf "%T# %P\n" | sort -nr | tail -1 | cut -d' ' -f 2-
which are two of many variants one can find using Google.
Additional information:
I'm writing this on a FreeBSD box with sh and bash installed. I have full access to the box and can install programs if needed. For reference: gfind is the GNU find utility known from Linux, since FreeBSD ships a different find by default.
Any help is appreciated.
with kind regards,
dura-zell
For the oldest/newest file issue, you can use the -t option of ls, which sorts the output by modification time.
-t Sort by descending time modified (most recently modified first).
If two files have the same modification timestamp, sort their
names in ascending lexicographical order. The -r option reverses
both of these sort orders.
For the size issue, you can use -S to sort files by size.
-S Sort by size (largest file first) before sorting the operands in
lexicographical order.
Notice that for both cases, -r will reverse the order of the output.
-r Reverse the order of the sort.
Those options are available on FreeBSD and Linux; and must be pretty common in most implementations of ls.
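For example (a sketch only; note the caveat in the next answer about parsing ls output, and that this does not distinguish files from directories):
ls -tr /path/to/dir | head -1   # oldest entry
ls -t  /path/to/dir | head -1   # newest entry
ls -S  /path/to/dir | head -1   # largest entry
ls -Sr /path/to/dir | head -1   # smallest entry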
Let us know if it's fast enough.
In general, you shouldn't be parsing the output of ls. In this case, it's just acting as a wrapper around stat anyway, so you may as well just call stat on each file, and use sort to get the oldest.
old) now=$(date +%s)
read name timestamp < <(stat -f "%N %B" "$2"/* | sort -k2,2n)
echo $(( $now - $timestamp ))
The above is concise, but doesn't distinguish between regular files and directories in the glob. If that is necessary, stick with find, but use a different form of -exec to minimize the number of calls to stat:
old ) now=$(date +%s)
read name timestamp < <(find "$2" -maxdepth 1 -type f -exec stat -f "%N %B" '{}' + | sort -k2,2n)
echo $(( $now - $timestamp ))
(Neither approach works if a filename contains a newline, although since you aren't using the filename in your example anyway, you can avoid that problem by dropping %N from the format and just sorting the timestamps numerically. For example:
read timestamp < <(stat -f %B "$2"/* | sort -n)
# or
read timestamp < <(find "$2" -maxdepth 1 -type f -exec stat -f %B '{}' + | sort -n)
)
Can you try creating a shell script that resides on the remote host and, when executed, provides the required output? Then from your local machine just use ssh or something similar to run it; that way the script runs locally on the server. Just a thought :-)
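A minimal sketch of that idea, with a hypothetical user, host, and script name:
# copy the check script to the remote host once
scp oldest_file.sh monitor@remotehost:/usr/local/bin/

# the monitoring system then runs it remotely; only the result
# crosses the slow line, not one stat call per file
ssh monitor@remotehost '/usr/local/bin/oldest_file.sh /livedrive/669/iwt.save'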

How to find bash commands?

I guess it's pretty simple.
I just want to locate a bash command. For example, I want to know which commands exist that contain the phrase "user".
So the command I am looking for should either print the locations of all commands containing "user", or just tell me which such commands exist. Either would be fine.
I searched here on SO and on Google, but both only turn up results about the find command.
List of executable files or symlinks in your PATH that contain "user":
find $(echo $PATH | tr ':' ' ') -maxdepth 1 \( -type f -or -type l \) -name '*user*' -executable
sample output:
/usr/bin/users
/usr/bin/xdg-user-dir
/usr/bin/xdg-user-dirs-gtk-update
/usr/bin/users-admin
/usr/bin/xdg-user-dirs-update
/bin/fuser
/bin/fusermount
/bin/ntfs-3g.usermap
/usr/sbin/deluser
/usr/sbin/adduser
/usr/sbin/useradd
/usr/sbin/userdel
/usr/sbin/usermod
/usr/sbin/newusers
Also a lot faster than wormsparty's variant (no offence :P). The results are almost identical (his returns directories too, AFAIK).
compgen -c | grep -i "user"
compgen [option] [word]
Generate possible completion matches for word according to the options, which may be any option accepted by the complete builtin with the exception of -p and -r, and write the matches to the standard output.
The matches will be generated in the same way as if the programmable completion code had generated them directly from a completion specification with the same flags. If word is specified, only those completions matching word will be displayed.
...
-A action The action may be one of the following to generate a list of possible completions:
alias Alias names. May also be specified as -a.
arrayvar Array variable names.
binding Readline key binding names (see Bindable Readline Commands).
builtin Names of shell builtin commands. May also be specified as -b.
command Command names. May also be specified as -c.
directory Directory names. May also be specified as -d.
disabled Names of disabled shell builtins.
enabled Names of enabled shell builtins.
export Names of exported shell variables. May also be specified as -e.
file File names. May also be specified as -f.
function Names of shell functions.
group Group names. May also be specified as -g.
helptopic Help topics as accepted by the help builtin (see Bash Builtins).
hostname Hostnames, as taken from the file specified by the HOSTFILE shell variable (see Bash Variables).
job Job names, if job control is active. May also be specified as -j.
keyword Shell reserved words. May also be specified as -k.
running Names of running jobs, if job control is active.
service Service names. May also be specified as -s.
setopt Valid arguments for the -o option to the set builtin (see The Set Builtin).
shopt Shell option names as accepted by the shopt builtin (see Bash Builtins).
signal Signal names.
stopped Names of stopped jobs, if job control is active.
user User names. May also be specified as -u.
variable Names of all shell variables. May also be specified as -v.
You may want to handle spaces in PATH entries and support more powerful regexes, but this does the trick:
#!/bin/sh
if [ $# -ne 1 ]; then
    echo "Usage: $0 pattern"
    exit 1
fi

for x in $(echo "${PATH}" | sed 's/:/ /g'); do
    for y in "$x"/*; do
        # print executables whose path matches the pattern
        if [ -x "$y" ] && echo "$y" | grep -q "$1"; then
            echo "$y"
        fi
    done
done
If you want to find all commands in a directory, on Linux you can use:
find /bin -type f -perm -o+x -name '*z*'
In this example, it will list all executables (programs) in the /bin directory that have a z in their name. If you want to search in multiple directories, you can write a script and call find in a loop, one time for each directory.
You can combine this with the previous answer to search in all directories on your path:
find $(echo $PATH | tr ':' ' ') -type f -perm -o=x -name '*z*'

bash: get list of commands starting with a given string

Is it possible to get, using Bash, a list of commands starting with a certain string?
I would like to get what is printed hitting <tab> twice after typing the start of the command and, for example, store it inside a variable.
You should be able to use the compgen command, like so:
compgen -A builtin [YOUR STRING HERE]
For example, "compgen -A builtin l" returns
let
local
logout
You can use other keywords in place of "builtin" to get other types of completion. Builtin gives you shell builtin commands. "File" gives you local filenames, etc.
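A few more examples of the same pattern (what they print depends on your shell setup):
compgen -A builtin lo    # builtins starting with "lo": local, logout
compgen -A function my   # shell functions starting with "my"
compgen -A file Dom      # files in the current directory starting with "Dom"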
Here's a list of actions (from the BASH man page for complete which uses compgen):
alias Alias names. May also be specified as -a.
arrayvar Array variable names.
binding Readline key binding names.
builtin Names of shell builtin commands. May also be specified as -b.
command Command names. May also be specified as -c.
directory Directory names. May also be specified as -d.
disabled Names of disabled shell builtins.
enabled Names of enabled shell builtins.
export Names of exported shell variables. May also be specified as -e.
file File names. May also be specified as -f.
function Names of shell functions.
group Group names. May also be specified as -g.
helptopic Help topics as accepted by the help builtin.
hostname Hostnames, as taken from the file specified by the HOSTFILE shell variable.
job Job names, if job control is active. May also be specified as -j.
keyword Shell reserved words. May also be specified as -k.
running Names of running jobs, if job control is active.
service Service names. May also be specified as -s.
setopt Valid arguments for the -o option to the set builtin.
shopt Shell option names as accepted by the shopt builtin.
signal Signal names.
stopped Names of stopped jobs, if job control is active.
user User names. May also be specified as -u.
variable Names of all shell variables. May also be specified as -v.
A fun way to do this is to hit M-* (Meta is usually left Alt).
As an example, type this:
$ lo
Then hit M-*:
$ loadkeys loadunimap local locale localedef locale-gen locate
lockfile-create lockfile-remove lockfile-touch logd logger login
logname logout logprof logrotate logsave look lorder losetup
You can read more about this in man 3 readline; it's a feature of the readline library.
If you want exactly what bash would complete:
COMPLETIONS=$(compgen -c "$WORD")
compgen completes using the same rules bash uses when tabbing.
JacobM's answer is great. For doing it manually, I would use something like this:
echo "$PATH" | tr : '\n' |
while read p; do
    for i in "$p"/mod*; do
        [[ -x "$i" && -f "$i" ]] && echo "$i"
    done
done
The test before the output makes sure only executable, regular files are shown. The above shows all commands starting with mod.
Interesting, I didn't know about compgen. Here's a script I've used to do it, which doesn't filter out non-executables:
#!/bin/bash
echo $PATH | tr ':' '\0' | xargs -0 ls | grep "$@" | sort
Save that script somewhere in your $PATH (I named it findcmd), chmod u+x it, and then use it just like grep, passing your favorite options and pattern:
findcmd ^foo # finds all commands beginning with foo
findcmd -i -E 'ba+r' # finds all commands matching the pattern 'ba+r', case insensitively
Just for fun, another manual variant:
find -L $(echo $PATH | tr ":" " ") -name 'pattern' -type f -perm -001 -print
where pattern specifies the file name pattern you want to use. This will miss commands that are not globally executable, but which you have permission for.
[tested on Mac OS X]
Use the -or and -and flags to build a more comprehensive version of this command:
find -L $(echo $PATH | tr ":" " ") -name 'pattern' -type f \
    \( \
        -perm -001 -or \
        \( -perm -100 -and -user $(whoami) \) \
    \) -print
will pick up files you have permission for by virtue of owning them. I don't see a general way to get all those you can execute by virtue of group affiliation without a lot more coding.
Iterate over the $PATH variable and do ls beginningofword* for each directory in the path?
To get it exactly equivalent, you would need to filter out only executable files and sort by name (should be pretty easy with ls flags and the sort command).
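A rough sketch of that approach (assumes PATH entries contain no spaces):
for dir in $(echo "$PATH" | tr ':' ' '); do
    for f in "$dir"/ls*; do
        [ -f "$f" ] && [ -x "$f" ] && echo "$f"
    done
done | sort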
What is listed when you hit <tab> are the binary files in your PATH that start with that string. So, if your PATH variable contains:
PATH=/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/lib/java/bin:/usr/lib/java/jre/bin:/usr/lib/qt/bin:/usr/share/texmf/bin:.
Bash will look in each of those directories to show you the suggestions once you hit <tab>. Thus, to get the list of commands starting with "ls" into a variable you could do:
MYVAR=$(ls /usr/local/bin/ls* /usr/bin/ls* /bin/ls*)
Naturally you could add all the other directories I haven't.
