CentOS directory structure as tree? - linux

Is there an equivalent to tree on CentOS?

If tree is not installed on your CentOS system (I typically recommend the minimal install disk for server setups anyhow), type the following at your command line:
# yum install tree -y
If this doesn't install, it's because you don't have the proper repository configured. I would use the Dag Wieers repository:
http://dag.wieers.com/rpm/FAQ.php#B
After that you can do your install:
# yum install tree -y
Now you're ready to roll. Always read the man page: http://linux.die.net/man/1/tree
So quite simply the following will return a tree:
# tree
Alternatively, you can output this to a text file. There are a ton of options, too. Again, read the man page if you're looking for something other than the default output.
# tree > recursive_directory_list.txt
(saved to a text file for later review)

You can make your own primitive "tree" (just for fun :) )
#!/bin/bash
# only if you have bash 4 in your CentOS system
shopt -s globstar
for file in **/*
do
    # keep only the slashes; their count is the nesting depth
    slash=${file//[^\/]}
    case "${#slash}" in
        0) echo "|-- ${file}";;
        1) echo "| |-- ${file}";;
        2) echo "| | |-- ${file}";;
    esac
done

As you can see here, tree is not installed by default in CentOS, so you'll need to look for an RPM and install it manually.

Since tree is not installed by default in CentOS ...
[user@CentOS test]$ tree
-bash: tree: command not found
[user@CentOS test]$
You can also use the following ls command to produce output similar to tree:
ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
Example:
[user@CentOS test]$ ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
.
|-directory1
|-directory2
|-directory3
[user@CentOS test]$
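You can try the pipeline on a throwaway directory tree; the directory names below are invented just for illustration:

```shell
# Demo of the ls -R | grep | sed pipeline on a scratch directory.
cd "$(mktemp -d)"
mkdir -p app/src app/docs
ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
# grep ":$"   keeps only the directory header lines ("./app:") from ls -R
# the sed expressions then drop the trailing colon, replace each leading
# path component with "--", indent the line, and turn the first "-" into "|"
```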

You have tree in the base repo.
Show it (yum list package-name):
# yum list tree
Available Packages
tree.i386 1.5.0-4 base
Install it:
yum install tree
(verified on CentOS 5 and 6)

I need to work on a remote computer that won't allow me to yum install. So I modified bash-o-logist's answer to get a more flexible one.
It takes an (optional) argument that is the maximum level of subdirectories you want to show. Add it to your $PATH, and enjoy a tree command that doesn't need installation.
I am not an expert in shell (I had to Google a ton of times just for this very short script). So if I did anything wrong, please let me know. Thank you so much!
#!/bin/bash
# only if you have bash 4 in your CentOS system
shopt -s globstar   # enable recursive ** matching
max_level=${1:-10}  # optional argument: maximum depth to show
for file in **
do
    # Get just the folder or file name (the last path component)
    IFS='/'
    read -ra ADDR <<< "$file"
    last_field=${ADDR[-1]}
    IFS=' '
    # Get the number of slashes, i.e. the nesting depth
    slash=${file//[^\/]}
    # Print the folder or file with the correct number of leaders
    if [ "${#slash}" -lt "$max_level" ]
    then
        spaces=" "
        leading=""
        if [ "${#slash}" -gt 0 ]
        then
            leading=$(eval "printf '|${spaces}%.0s' {1..${#slash}}")
        fi
        echo "${leading}|-- $last_field"
    fi
done
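Both scripts rely on the same depth trick: deleting every non-slash character from the path leaves only the slashes, so the length of what remains is the nesting depth. A minimal illustration (the path is arbitrary):

```shell
# bash-only parameter expansion: ${var//pattern} deletes all matches.
file="a/b/c/file.txt"
slash=${file//[^\/]}   # deletes every non-slash character, leaving "///"
echo "${#slash}"       # prints 3: the nesting depth of file.txt
```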


How to match two strings from separate lists - Posix Shell

I have a POSIX shell script that tries to find manual install paths for Tomcat on an Ubuntu (20.04) system. I run dpkg-query to build a list of installs that came from the package manager, then iterate over a list of common install paths. If any of the installs found via dpkg-query match, I need to exclude them from my discovery, but I can't figure out an efficient way to do this.
Code:
#/bin/sh
APP="Apache Tomcat"
# Common install paths to iterate over
set -- "/opt/" "/usr/share/" "/var/" "/var/lib/" "/usr/" "/usr/local/"
# Create exclusion list for managed installs from apt and yum
EXCLUSION_LIST=""
if [ -x "$(command -v dpkg-query)" ]; then
    dpkg_check=$(dpkg-query -l | grep -Eo 'tomcat[0-9]?' | sort -u)
    EXCLUSION_LIST=$dpkg_check
    echo $EXCLUSION_LIST
    # Check for manual installs
    for _i in "$#"; do
        find_tomcat=$(ls $_i | grep -E '^tomcat[0-9]{0,2}$')
        excluded_list=$(echo $EXCLUSION_LIST)
        for match in $excluded_list; do
            if [ $find_tomcat == $match ]; then
                echo "$match is excluded"
            else
                echo $_i$find_tomcat
        done
    done
else
    echo "DPKG not found."
fi
Desired Output:
Install path of manual version not installed by the package manager.
E.g., /usr/share/tomcat
Observations:
The first line needs to start literally with the two characters #! to be a valid shebang
You are needlessly copying the same strings to multiple variables, in one place with a useless echo
Don't use upper case for your private variables; see Correct Bash and shell script variable capitalization
When to wrap quotes around a shell variable?
Don't parse ls output. Instead, I propose looping over wildcard matches in this case.
Your current logic seems to be basically saying, "if tomcat8 was installed by dpkg, don't print any tomcat8 paths." I'm guessing you actually want to avoid printing paths which are installed by a dpkg-installed tomcat* package, and print others.
Your dpkg-query -l command is slightly inexact, in that it will find not just package names, but also package descriptions which contain the search string. You can use dpkg-query -W instead, to only print package names and versions.
dpkg -L prints the actual paths installed by a set of packages.
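The sed extraction over dpkg-query -W output can be sanity-checked on synthetic input; the package names and versions below are made up:

```shell
# dpkg-query -W prints "name<TAB>version" lines; fake a few of them and
# keep only package names that are exactly tomcat or tomcatN(N).
printf 'tomcat9\t9.0.31-1\nlibc6\t2.31-13\ntomcat9-common\t9.0.31-1\n' |
    sed -n 's/^\(tomcat[0-9]*\)\t[^\t].*/\1/p' | sort -u
# prints only: tomcat9  (tomcat9-common has a "-" where the tab must be)
```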
Untested, but hopefully at least a step in the right direction.
#!/bin/sh
# Unused variable
# APP="Apache Tomcat"
if [ -x "$(command -v dpkg-query)" ]; then
    exclusion_list=$(dpkg-query -W | sed -n 's/^\(tomcat[0-9]*\)\t[^\t].*/\1/p' | sort -u)
    echo "$exclusion_list"
    # Check for manual installs
    for _i in "/opt" "/usr/share" "/var" "/var/lib" "/usr" "/usr/local"; do
        for found in "$_i"/tomcat*; do
            case $found in
                */tomcat | */tomcat[0-9] | */tomcat[0-9][0-9] ) ;;
                *) continue;;
            esac
            # Is this exact path owned by one of the dpkg-installed packages?
            if dpkg -L $exclusion_list | fgrep -qx "$found"
            then
                echo "$found is excluded"
            else
                echo "$found"
            fi
        done
    done
else
    echo "DPKG not found."
fi
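The case statement in the script above does the name filtering; here is a standalone sketch of it with hypothetical directory names:

```shell
# Only names matching tomcat, tomcatN, or tomcatNN pass the filter.
for name in tomcat tomcat9 tomcat10 tomcat-user notes; do
    case $name in
        tomcat | tomcat[0-9] | tomcat[0-9][0-9] ) echo "kept: $name" ;;
        * ) ;;  # everything else is skipped
    esac
done
# prints: kept: tomcat, kept: tomcat9, kept: tomcat10
```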

What is "the regular file" on CentOS and Ubuntu?

My environment is:
CentOS 6.9
Ubuntu 16.04 LTS
The GNU coreutils 8.4 has the test command to check the file using -f option.
man test shows
-f FILE
FILE exists and is a regular file
The definition of a "regular file" is unclear to me.
On the terminal, I did
$ touch afile
$ ln -fs afile LN-FILE
Then, I executed the following script (check_file_exist_180320_exec)
#!/usr/bin/env bash
if [ -e LN-FILE ]
then
    echo "file exists [-e]"
fi
if [ -f LN-FILE ]
then
    echo "file exists [-f]"
fi
On both CentOS and Ubuntu, the script prints both the -e and -f messages for the symlinked file (LN-FILE).
However, ls -l shows the type identifier l (symbolic link), not - (regular file), for LN-FILE.
(see https://linuxconfig.org/identifying-file-types-in-linux)
On the other hand, I found following,
Difference between if -e and if -f
A regular file is something that isn't a directory / symlink / socket / device, etc.
— jman, answered Apr 18 '12 at 7:10
What is the reference I should check for the "regular file" (for CentOS and Ubuntu)?
Note the documentation further down in man test
Except for -h and -L, all FILE-related tests dereference symbolic
links.
Basically, when you do -f LN-FILE and LN-FILE is a symbolic link, the -f test will follow that symlink and give you the result for what the symlink points to.
If you want to check if a file is a symlink or a regular file, you need to do e.g.
if [ -h LN-FILE ]
then
    echo "file is a symlink [-h]"
elif [ -f LN-FILE ]
then
    echo "file is a regular file [-f]"
fi
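A quick way to see both behaviors side by side is to run the tests in a scratch directory (the file names here are arbitrary):

```shell
cd "$(mktemp -d)"
touch afile           # a regular file
ln -s afile alink     # a symlink pointing at it
[ -f alink ] && echo "-f follows the link: the target is a regular file"
[ -h alink ] && echo "-h looks at the link itself"
[ -h afile ] || echo "afile is not a symlink"
```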
A regular file is a file that is neither a directory nor a special file such as a symbolic link, device, socket, or FIFO. It can hold either text or binary data; it is often loosely equated with an ordinary on-disk file.

Allow sh to be run from anywhere

I have been monitoring the performance of my Linux server with ioping (had some performance degradation last year). For this purpose I created a simple script:
echo $(date) | tee -a ../sb-output.log | tee -a ../iotest.txt
./ioping -c 10 . 2>&1 | tee -a ../sb-output.log | grep "requests completed in\|ioping" | grep -v "ioping statistics" | sed "s/^/IOPing I\/O\: /" | tee -a ../iotest.txt
./ioping -RD . 2>&1 | tee -a ../sb-output.log | grep "requests completed in\|ioping" | grep -v "ioping statistics" | sed "s/^/IOPing seek rate\: /" | tee -a ../iotest.txt
etc
The script calls ioping in the folder /home/bench/ioping-0.6. Then it saves the output in readable form in /home/bench/iotest.txt. It also adds the date so I can compare points in time.
Unfortunately I am not an experienced programmer, and this version of the script only works if you first change into the right directory (/home/bench/ioping-0.6).
I would like to call this script from anywhere. For example by calling
sh /home/bench/ioping.sh
Googling this and reading about path variables was a bit over my head. I kept ending up with different versions of
line 3: ./ioping: No such file or directory
Any thoughts on how to upgrade my scripts so that it works anywhere?
The trick is the shell's $0 variable. This is set to the path of the script.
#!/bin/sh
set -x
cd "$(dirname "$0")"
pwd
cd "${0%/*}"
pwd
If dirname isn't available for some reason, like some limited busybox distributions, you can try using shell parameter expansion tricks like the second one in my example.
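To see the $0 trick in action, you can drop a small test script into one directory and run it from somewhere else (the paths below are hypothetical, created with mktemp):

```shell
# Create a throwaway script that cds to its own directory.
dir=$(mktemp -d)
printf '#!/bin/sh\ncd "$(dirname "$0")" && pwd\n' > "$dir/where.sh"
chmod +x "$dir/where.sh"
# Run it from / — it still reports its own directory, not the caller's.
( cd / && sh "$dir/where.sh" )
```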
Isn't it obvious? ioping is not in . so you can't use ./ioping.
The easiest solution is to add the directory where ioping lives to PATH. Perhaps more robust: figure out the path to $0 and use that as the location for ioping (assuming your script sits next to ioping).
If ioping itself depends on being run in a certain directory, you might have to make your script cd to the ioping directory before running it.

lpr command not working with Cygwin

#!/bin/bash
while :
do
    if [ -e ./*.pdf ]
    then
        #printer=$(lpstat -p | grep printer | head -n1 | cut -d \ -f 2)
        printer=$(cat printer.ini)
        for file in *.pdf
        do
            echo "Printing $file"
            lpr -P "$printer" "$file"
            echo "Moving $file"
            mv "$file" ./p
        done
    fi
done
When I try to run this script on Windows using Cygwin, it says "lpr is not recognized as an internal or external command". Please give me a solution for this.
Cygwin has modules (packages). A limited number of them are installed by default; you need to choose the ones you need by running the setup again and selecting them. lpr is in cygutils, IIRC. Also, you seem to be running this in a Windows command prompt instead of sh or mintty, etc. (the error message is specific to cmd.exe, AFAIK).

How to create an offline repository for debian non-free?

I am using Debian squeeze and want to create an offline repository, or a CD/DVD, for the Debian non-free branch. I looked around the internet and all I found out is that there are neither ISO images nor jigdo files for creating such an image, so I had the idea to fetch the packages from one of the Debian package servers using:
wget -r --no-parent -nH -A*all.deb,*any.deb,*i386.deb \
ftp://debian.oregonstate.edu/debian/pool/non-free/
I know that I must use file: in my /etc/apt/sources.list to point to local repositories, but how do I actually create one so that apt or aptitude understands it?
(Answered in a question edit. Converted to a community wiki answer. See: What is the appropriate action when the answer to a question is added to the question itself?)
The OP wrote:
Update: With a few ugly tricks I was able to extract the needed data from pool and the dist folder.
I used the unzipped Packages.gz to do this:
grep '^Package\:.*' Packages|awk '{print $2}' >> Names.lst
grep '^Version\:.*' Packages|awk '{print $2}' >> Versions.lst
grep '^Architecture\:.*' Packages|awk '{print $2}' >> Arch.lst
With vim I found and removed the ':' in the file Versions.lst, and generated a shorter Content.lst that is easier to parse with bash tools:
paste Names.lst Versions.lst Arch.lst >> Content.lst
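The extraction steps can be tried on a tiny synthetic Packages fragment; the two package entries below are invented for illustration:

```shell
cd "$(mktemp -d)"
# Fake Packages file with two entries.
cat > Packages <<'EOF'
Package: foo
Version: 1.0-1
Architecture: i386

Package: bar
Version: 2.3-4
Architecture: all
EOF
grep '^Package\:.*' Packages | awk '{print $2}' > Names.lst
grep '^Version\:.*' Packages | awk '{print $2}' > Versions.lst
grep '^Architecture\:.*' Packages | awk '{print $2}' > Arch.lst
paste Names.lst Versions.lst Arch.lst
# two tab-separated lines: "foo 1.0-1 i386" and "bar 2.3-4 all"
```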
Now I do this:
cat Content.lst | while read line; \
do echo "$(echo $line|awk '{print $1}')\
_$(echo $line|awk '{print $2}')_$(echo $line|awk '{print $3}')";\
done >> Content.lst.tmp && mv Content.lst.tmp Content.lst
which generates the file names in the debian directory that I need. After finishing my downloads with wget, I find and rsync the needed files. mv does not work here because I need the directory structure exactly as Packages.gz refers to it:
cat Content.lst | while read line; \
do find debian/ -type f -name ${line}.deb -exec \
rsync -rtpog -cisR {} debian2/ \; ;done
rm -r debian && mv debian2 debian
To receive the complete dists tree structure I used wget again:
wget -c -r --no-parent -nH -A*.bz2,*.gz,Release \
ftp://debian.oregonstate.edu/debian/dists/squeeze/non-free/binary-i386/
I think the only thing I have to do now is to create the Contents.gz file.
The Contents.gz file can easily be created using the apt-ftparchive program:
apt-ftparchive contents debian/ > Contents-i386 && gzip -f Contents-i386
