Check owner and permissions for multiple paths read from a file in a bash script - Linux

Friends,
I have a problem with my script. I try to find the full path of every bin/openssl binary, save the results to a file, then read the paths back from that file and check their permissions and owners.
But the result file /tmp/list.txt contains multiple paths. How can I make the script read and check every path from the result file?
Any ideas?
#!/bin/sh
module_id="AV.1.8.2.1"
echo " === $module_id module === "
# MODULE BODY
find / -wholename "*bin/openssl" -print > /tmp/list.txt
file="/tmp/list.txt"
path=$(cat /tmp/list.txt)
os=$(uname)
permission_other_good=0
if [ -e $path ]
then
    if [ $os = "Linux" ]
    then
        gid_min=$(grep ^GID_MIN /etc/login.defs)
        gid_min_value=$(echo $gid_min | cut -d " " -f2)
        uid_min=$(grep ^UID_MIN /etc/login.defs)
        uid_min_value=$(echo $uid_min | cut -d " " -f2)
        sys_gid_max=$(grep ^SYS_GID_MAX /etc/login.defs)
        sys_gid_max_value=$(echo $sys_gid_max | cut -d " " -f2)
        sys_uid_max=$(grep ^SYS_UID_MAX /etc/login.defs)
        sys_uid_max_value=$(echo $sys_uid_max | cut -d " " -f2)
        user_uid=$(stat -c %u $path)
        user=$(stat -c %U $path)
        group_gid=$(stat -c %g $path)
        group=$(stat -c %G $path)
        permission_other=$(stat -c %A $path | cut -b 8-10)
        if [ -z $gid_min_value ]
        then
            gid_min_value=1000
        fi
        if [ -z $uid_min_value ]
        then
            uid_min_value=1000
        fi
        if [ -z $sys_gid_max_value ]
        then
            sys_gid_max_value=$((gid_min_value-1))
        fi
        if [ -z $sys_uid_max_value ]
        then
            sys_uid_max_value=$((uid_min_value-1))
        fi
    fi
    if [ $os = "AIX" ]
    then
        gid_min_value=$(cat /etc/security/.ids | cut -d " " -f4)
        sys_gid_max_value=$((gid_min_value-1))
        uid_min_value=$(cat /etc/security/.ids | cut -d " " -f2)
        sys_uid_max_value=$((uid_min_value-1))
        user_uid=$(istat $path | awk -F " " 'NR==3{print $2}' | cut -d "(" -f1)
        user=$(istat $path | awk -F " " 'NR==3{print $2}' | cut -d "(" -f2 | cut -d ")" -f1)
        group_gid=$(istat $path | awk -F " " 'NR==3{print $4}' | cut -d "(" -f1)
        group=$(istat $path | awk -F " " 'NR==3{print $4}' | cut -d "(" -f2 | cut -d ")" -f1)
        permission_other=$(istat $path | awk -F " " 'NR==2{print $2}' | cut -b 7-9)
    fi
    if [ $permission_other = "r-x" ] || [ $permission_other = "r--" ] || [ $permission_other = "--x" ] || [ $permission_other = "---" ]
    then
        permission_other_good=1
    fi
    if [ $user_uid -le $sys_uid_max_value ] && [ $group_gid -le $sys_gid_max_value ] && [ $permission_other_good -eq 1 ]
    then
        compliant="Yes"
        actual_value="user = $user group = $group permission = $permission_other"
    else
        compliant="No"
        actual_value="user = $user group = $group permission = $permission_other"
    fi
else
    compliant="N/A"
    actual_value="File $file does not exist"
fi
# SCRIPT RESULT
echo :::$module_id:::$compliant:::$actual_value:::
echo " === End of $module_id module === "

Since you are using bash, I would advise against using temporary files: they come with a lot of caveats from the filesystem (what happens if your disk is full? if the file already exists? if you don't have write permission on the base directory?) and from the limitations of storing filenames in line-based text files (though that last point should not be an issue when locating openssl).
If you want to parse a list of files, here is a common and safe way to do that:
find / -wholename "*bin/openssl" -print0 | while IFS= read -r -d '' path
do
    # Write your code here using the 'path' variable
done
But because you assign variables inside the while loop, and the pipe between find and while runs the loop in a subshell, those assignments would be lost once the loop ends. You can circumvent this problem by using process substitution instead:
while IFS= read -r -d '' path
do
    # Write your code here using the 'path' variable
done < <(find / -wholename "*bin/openssl" -print0)
This behaves essentially the same as the previous code, except that the variable scope is not limited to the loop.
PS. You will have to deal with the assignment of your variables over several openssl paths. If you just copy/paste your code inside the loop, compliant and actual_value will only retain the values from the last path, which is probably not what you want.
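For example, here is a rough sketch of how your script could sit around that loop. It assumes bash (note that your shebang says /bin/sh), and check_path is a hypothetical helper where your existing Linux/AIX per-path logic would go, printing one result line per path:
#!/bin/bash
module_id="AV.1.8.2.1"
echo " === $module_id module === "

# Hypothetical helper: move your existing stat/istat permission checks
# in here, operating on a single path and quoting "$path" everywhere.
check_path() {
    path="$1"
    # ... your Linux/AIX logic setting compliant and actual_value ...
    echo ":::$module_id:::$compliant:::$actual_value:::"
}

while IFS= read -r -d '' path
do
    check_path "$path"
done < <(find / -wholename "*bin/openssl" -print0 2>/dev/null)

echo " === End of $module_id module === "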

This is how you read a file line by line in bash:
#!/bin/bash
inputfile="/tmp/list.txt"
while IFS= read -r line
do
    echo "do your stuff with $line"
done < "$inputfile"

OK, the best solution is to put the results in an array and then loop over them with for:
declare -a my_paths
my_paths=($(find / -wholename "*bin/openssl" -print 2>/dev/null))
for my_path in "${my_paths[@]}"; do
    # your per-path checks go here
done
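If your bash is new enough (mapfile -d needs bash 4.4+), you can also fill the array without word-splitting problems by pairing mapfile with find -print0; a sketch:
declare -a my_paths
mapfile -d '' -t my_paths < <(find / -wholename "*bin/openssl" -print0 2>/dev/null)
for my_path in "${my_paths[@]}"; do
    echo "checking $my_path"
done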

Related

Syntax error near unexpected token `fi` - Linux

I am using WSL (Ubuntu) on Windows. I ran bash script.sh for the script below:
#! /bin/sh
#################LOAD FILES###################
lead_SNPs=`grep "lead_SNPs" ../prep/files.txt | cut -f2`
bfile=`grep -w "bfile" ../prep/files.txt | cut -f2`
bfile_list=`grep -w "bfile_list" ../prep/files.txt | cut -f2`
r2=`grep "r2" ../prep/parameters.txt | cut -f2`
###############LD###########################
if [ ${bfile} = "NA" ]; then
    cat ${bfile_list} | while read line; do
        file=${line}
        file_n=`echo $file | awk -F '/' '{print $NF}'`
        echo 'Calculating LD'
        plink --bfile ${file} --r2 --ld-window-kb 1000 --ld-window 999999 --ld-window-r2 ${r2} --ld-snp-list ${lead_SNPs} --out C:/Users/naghm/Desktop/FDSP-github/ld/${file_n}
    done
else
    file=${bfile}
    file_n=`echo $file | awk -F '/' '{print $NF}'`
    echo ${file_n}
    plink --bfile ${file} --r2 --ld-window-kb 1000 --ld-window 999999 --ld-window-r2 ${r2} --ld-snp-list ${lead_SNPs} --out C:/Users/naghm/Desktop/FDSP-github/ld/${file_n}
fi
but I get this error:
Syntax error near unexpected token `fi`
Can you correct my code, please? I cannot see where I made the mistake.
Try to change:
if [ ${bfile} = "NA" ]; then
to
if [ "${bfile}" = "NA" ]; then
I suspect ${bfile} is empty, which would make your original line expand to:
if [ = "NA" ]; then
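You can see the difference with an empty variable; a quick sketch you can run in bash:
bfile=""
[ $bfile = "NA" ]    # expands to: [ = "NA" ]   -> "[: =: unary operator expected"
[ "$bfile" = "NA" ]  # expands to: [ "" = "NA" ] -> simply false, no error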
The first line of your script doesn't look right either. It should be:
#!/bin/sh
or, if you are using the bash shell:
#!/bin/bash

How to run iterations asynchronously in shell script

I have a few .csv files like below.
xyz0900#1#-1637746436.csv
xxx0900#1#-1637746436.csv
zzz0900#2#-1637746439.csv
yyy0900#1#-1637746436.csv
sss0900#2#-1637746439.csv
I have written a script to perform the following tasks:
Get the largest file matching the pattern passed as an argument to the script.
Merge all other files having the same pattern into it, creating a new file.
Remove the duplicate headers from the new file.
Move the new file to the destination given by the second argument.
Example: I pass "1637746436#home/dest1,1637746439#home/dest2" as
the second argument to the script. The script below extracts the
pattern (1637746436), takes the biggest matching file, merges all other
files having the same pattern into it, creates the new file, and moves it to the destination (home/dest1).
The script below performs the pattern matching and execution sequentially.
How can I make the for-loop iterations run in parallel? I mean the processing of "1637746436#home/dest1" and "1637746439#home/dest2" should happen simultaneously, not one after another.
Please help with this.
$ merge.sh /home/dummy/17 "1637746436#home/dest1,1637746439#home/dest2"
#!/bin/bash
current=`pwd`
source=$1
destination=$2
echo "$destination" | tr "," "\n" > $current/out.txt
cat out.txt | cut -d "#" -f1 > $current/pattern.txt
for var in `cat pattern.txt`
do
    getBiggerfile=$(ls -Sl $source/*$var.csv | head -1)
    cd $source
    getFileName=$(echo $getBiggerfile | cut -d " " -f9-)
    newFileName=$(echo $getFileName | cut -d "#" -f1)
    cat *$var.csv >> $getFileName
    header=$(head -n 1 $getFileName)
    (printf "%s\n" "$header";
     grep -vFxe "$header" $getFileName
    ) > $newFileName.csv
    rm -rf *$var.csv
    cd $current
    for var1 in `cat out.txt`
    do
        target=`echo $var1 | cut -d "#" -f2`
        id=$(echo $var1 | cut -c-10)
        if [ $id = $var ]
        then
            mv $newFileName.csv $target
        fi
    done
done
The cleanest way would be to make the internals of the loop a function, call the function inside the loop, putting each call in the background (child processes), then wait for the background processes to finish:
do_the_thing() {
    source="$1"
    current="$2"
    var="$3"
    getBiggerfile=$(ls -Sl $source/*$var.csv | head -1)
    cd $source
    getFileName=$(echo $getBiggerfile | cut -d " " -f9-)
    newFileName=$(echo $getFileName | cut -d "#" -f1)
    cat *$var.csv >> $getFileName
    header=$(head -n 1 $getFileName)
    (printf "%s\n" "$header";
     grep -vFxe "$header" $getFileName
    ) > $newFileName.csv
    rm -rf *$var.csv
    cd $current
    for var1 in `cat out.txt`
    do
        target=`echo $var1 | cut -d "#" -f2`
        id=$(echo $var1 | cut -c-10)
        if [ $id = $var ]
        then
            mv $newFileName.csv $target
        fi
    done
}

for var in `cat pattern.txt`
do
    do_the_thing "$source" "$current" "$var" &
done
wait
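If pattern.txt can contain many patterns, you may also want to cap how many jobs run at once. A small sketch, assuming bash 4.3+ for wait -n and a hypothetical limit of 4 parallel jobs:
max_jobs=4
for var in `cat pattern.txt`
do
    # Block until a slot frees up before launching the next job
    while [ "$(jobs -rp | wc -l)" -ge "$max_jobs" ]; do
        wait -n
    done
    do_the_thing "$source" "$current" "$var" &
done
wait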

Looping over multiple files, but only the 1st file is read

So I have a script that works by going into multiple directories and, within those directories, pulling multiple fields from files and storing them in .txt files.
There are two loops: the first one loops through all the folders,
the second one loops through all the files.
The problem I encounter is in the second loop: it reads only the first file in the folder, then moves on to the next folder and ignores all the other files in that folder.
archive=/imdata/archive
inventory_archive=/imdata/a/shares/b/inventory/c
ls $archive | while read p; do
    echo "Project: $p"
    mkdir -v $inventory_archive/$p
    dir=$inventory_archive/$p
    ls -1 $archive/$p/d001 | while read s; do
        echo "Searching Session: $s ..."
        find $archive/$p/d001/$s -type f -iname "*.txt" | while read f; do
            echo "FILE: $f"
            study=`/home/me/program/bin/script $f | grep -m1 "field1" | cut -d "[" -f2 | cut -d "]" -f1`
            echo "SID: $study"
            if [ ! -d "$dir/$study" ]; then
                mkdir -v $dir/$study
            fi
            studydir=$dir/$study
            series=`/home/me/program/bin/script $f | grep -m1 "field2" | cut -d "[" -f2 | cut -d "]" -f1`
            echo "SID_2: $series"
            if [ ! -a "$studydir/$series.txt" ]; then
                touch $studydir/$series.txt
            fi
            sop=`/home/me/program/bin/script $f | grep -m1 "field3" | cut -d "[" -f2 | cut -d "]" -f1`
            echo "SID_3: $sop"
            grep -qsF $sop $studydir/$series.txt || echo $sop >> $studydir/$series.txt
            exit 1;
        done;
    done;
done;
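The exit 1 at the end of the innermost loop looks like the culprit: each while fed by a pipe runs in its own subshell, so exit 1 terminates only that innermost subshell, right after the first file, while the two outer loops carry on to the next folder. Removing it should make the loop visit every file; the tail of the inner loop would then read:
            grep -qsF $sop $studydir/$series.txt || echo $sop >> $studydir/$series.txt
            # no "exit 1" here -- it ended the inner loop after the first file
        done;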

How to grep files and redirect output to a CSV in Linux?

I have got the following directory structure:
/releases/customer1/filesNeeded/file1.xml
/releases/customer1/filesNeeded/subfolder/file2.xml
/releases/customer2/filesNeeded/file1.xml
/releases/customer2/filesNeeded/subfolder/file2.xml
And so on.
In file1.xml I have -parameter1 value -parameter2 value and so on. I need to create a script that loops through each customer, first checks whether the 'filesNeeded' folder is present, and if it is, greps each parameter value and writes all these values to a single CSV file.
Any ideas?
The folder structure is as you mentioned, with the content of each file as shown:
/releases/customer1/filesNeeded/file1.xml
sdshsjsd -parameter1 value1 jksdj -parameter2 value2 jdskjdks
sdshsjsd -parameter20 value20 jksdj -parameter21 value21 jdskjdks
sdshsjsd -parameter22 value22 jksdj -parameter23 value23 jdskjdks
sdshsjsd -parameter24 value24 jksdj -parameter25 value25 jdskjdks
/releases/customer1/filesNeeded/subfolder/file2.xml
lskdlskd -parameter6 value6 ljl
/releases/customer2/filesNeeded/file1.xml
sdkhdsh -parameter12 value12 jksdj -parameter15 value15
/releases/customer2/filesNeeded/subfolder/file2.xml
kdsjkjsd -parameter17 value17
kdsjkjsd -parameter100 value100 -parameter value
kdsjkjsd -parameter170 value170
Script:
#!/bin/bash
i=1
while true
do
    # Check whether the customerN directory is present
    if [ -d customer$i ]
    then
        # Clean up a previously generated csv file, if any
        if [ -f customer$i.csv ]
        then
            rm customer$i.csv
        fi
        echo ""
        echo "Checking in customer$i folder"
        if [ -d customer$i/filesNeeded ]
        then
            j=1
            num_files=`find customer$i/filesNeeded -type f -name "*.xml" | wc -l`
            echo " Number of valid files in customer$i/filesNeeded directory = $num_files"
            while [ $j -le $num_files ]
            do
                file_name=`find customer$i/filesNeeded -type f -name "*.xml" | head -n $j | tail -n 1`
                echo " Processing file : $file_name"
                cat $file_name | tr -s " " | sed -r 's/-parameter[0-9]+/\n/g' | grep "^ " | cut -d " " -f 2 >> tmp_customer$i.csv
                j=$((j + 1))
            done
            # Join the collected values into the final csv (only if any were found)
            if [ -f tmp_customer$i.csv ]
            then
                cat tmp_customer$i.csv | tr '\n' ',' > customer$i.csv
                rm tmp_customer$i.csv
            fi
        fi
    else
        break
    fi
    i=$((i + 1))
done
Output Files Generated:
customer1.csv
value1,value2,value20,value21,value22,value23,value24,value25,value6,
customer2.csv
value12,value15,value17,value100,value170,
Hope this helps!
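For comparison, here is a shorter sketch of the same extraction. It assumes GNU grep for the -o/-h options and that parameter values never contain spaces:
#!/bin/bash
for d in /releases/customer*/; do
    c=$(basename "$d")
    [ -d "$d/filesNeeded" ] || continue
    # Pull out every "-parameterN value" pair, keep only the value,
    # and join the values with commas into one CSV line per customer.
    find "$d/filesNeeded" -type f -name "*.xml" \
        -exec grep -ohE -- '-parameter[0-9]+ +[^ ]+' {} + \
        | awk '{print $2}' | paste -sd, - > "$c.csv"
done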

Shell program doesn't work

For multiple folders provided as input by the user, I'd like to count how many of the files and folders they contain have different permission settings than the containing folder itself.
I've written the following shell code. Why does it display the rights, but not count anything?
#!/bin/sh
if [ ! -d $1 ]
then
    echo $1 is not a directory
    exit1
fi
ls -R $1 > temp
permission= ls -al $1 | cut -d" " -f1
for i in `cat temp`
do
    perm= ls -l $i | cut -d" " -f1
    if [ $permission -ne $perm ]
    then
        n=`expr $n + 1`
    fi
    echo $n
done
You shouldn't use -ne for string comparisons. You need to do this:
if [ "$permission" != "$perm" ]
then
    n=`expr $n + 1`
fi
You need to initialise n before you can increment it.
n=0
You need to fix your command substitution:
permission=$(ls -al $1 | cut -d" " -f1)
perm=$(ls -l $i | cut -d" " -f1)
exit1 should be exit 1
You want to use command substitution:
permission=$(ls -al $1 | cut -d" " -f1)
# ...
perm=$(ls -l $i | cut -d" " -f1)
You are not initializing your variable $n, so your call to expr expands to expr + 1 which is a syntax error. You should see lots of "expr: syntax error" messages on stderr. Just add the line n=0 before your loop and you should be fine.
Adding to the other answers:
exit1 should be exit 1
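Putting the fixes together, a minimal corrected sketch (simplified to a single directory level, and using ls -ld so each entry reports its own permission string; recursing like the original ls -R would need find instead):
#!/bin/sh
if [ ! -d "$1" ]; then
    echo "$1 is not a directory"
    exit 1
fi
n=0
# Permission string of the containing folder itself
permission=$(ls -ld "$1" | cut -d" " -f1)
for i in "$1"/*; do
    perm=$(ls -ld "$i" | cut -d" " -f1)
    if [ "$permission" != "$perm" ]; then
        n=`expr $n + 1`
    fi
done
echo "$n"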
