Remove text from a txt file that is in an array - linux

So I have these two functions: cMinOption and cMaxOption.
cMinOption reads all the words from pt.stop_words.txt into myArray.
cMaxOption prints a word-frequency list from ficha01.pdf.txt, and that list still includes the stop words from pt.stop_words.txt.
I want to know how I can take the words from the array I created and delete those words from the list that the cMaxOption function prints.
cMinOption() {
    declare -a myArray
    mapfile -t myArray < pt.stop_words.txt
    # This replaces the literal string "myArray", not the array's contents:
    sed -i 's/myArray/""/g' ficha01.pdf.txt
    sed -e 's/[^[:alpha:]]/ /g' ficha01.pdf.txt | tr '\n' ' ' | tr -s ' ' | tr ' ' '\n' | tr 'A-Z' 'a-z' | sort | uniq -c | sort -nr | nl | head -n 7
    for ((i = 0; i < ${#myArray[@]}; i++)); do
        echo "Element [$i]: ${myArray[$i]}"
    done
}
# Called when -C is passed as an argument
cMaxOption() {
    # Check whether the file passed in $1 is a PDF file
    if [ "$(head -c 4 "$1")" = "%PDF" ]; then
        pdftotext "$1" "$1.txt"
        echo "'$1': PDF file"
        file="$1.txt"
    else
        echo "'$1': TXT file"
        file="$1"
    fi
    echo "[INFO] processing '$file'"
    echo "STOP WORDS will be counted"
    echo "COUNT MODE"
    sed -e 's/[^[:alpha:]]/ /g' "$file" | tr '\n' ' ' | tr -s ' ' | tr ' ' '\n' | tr 'A-Z' 'a-z' | sort | uniq -c | sort -nr | nl | head -n 7
    mv "$file" "result---$file"
    echo "RESULTS: 'result---$file'"
    ls -l "result---$file"
    echo -e " $(sort -u "result---$file" | wc -l) distinct words"
}
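One way to do this (a sketch, not part of the original functions) is to filter the word stream through grep -vxFf, which drops every line that exactly matches a line of the stop-word file; the file names follow the question:

```shell
#!/bin/bash
# Sketch: build the word-frequency list with the stop words removed.
# grep -v inverts the match, -x matches whole lines, -F takes fixed
# strings, -f reads the patterns (one stop word per line) from a file.
count_without_stop_words() {
    # count_without_stop_words TEXTFILE STOPFILE
    tr -cs 'A-Za-z' '\n' < "$1" \
        | tr 'A-Z' 'a-z' \
        | grep -v '^$' \
        | grep -vxFf "$2" \
        | sort | uniq -c | sort -nr
}

# e.g.: count_without_stop_words ficha01.pdf.txt pt.stop_words.txt | head -n 7
```

This avoids editing ficha01.pdf.txt in place with sed, and it works regardless of how many stop words the file contains.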

Related

How to use cat and EOF in gnuplot to create a file while executing the gnuplot script

I want to make a file, for.sh, from within the gnuplot script during its execution. The new file will then be called in the gnuplot script as
set xtics (`(sh for.sh)`)
The contents of for.sh should be:
#!/bin/bash
rm Final-X-ticks.dat
KPATH=data1
TICK_position=data2
sed -n '/crystal coordinates with respect/,/REPRODUCED(TRANSFORMED) DATASET/p' $KPATH | sed '/crystal/d' | sed '/REPRODUCED/d' | awk '{print $4}' | sed -r '/^\s*$/d' > symbl.dat
line=$(cat symbl.dat | wc -l)
for i in `seq 1 1 $line`
do
echo " '{/Times-New " > symbl-$i.dat
cat symbl.dat | tail -n $line | head -n "$i" | tail -n 1 >> symbl-$i.dat
echo " }'" >> symbl-$i.dat
grep -i "coordinate" $TICK_position | awk '{print $NF}' | head -n "$i" | tail -n 1 >> kpoints-$i.dat
cat kpoints-$i.dat | tail -n 1 | awk 'BEGIN { ORS = " " } { print }' >> symbl-$i.dat
echo " ," >> symbl-$i.dat
cat symbl-$i.dat | awk 'BEGIN { ORS = " " } { print }' | awk '{gsub(/Times-New[ \t]+(G|\$\\Gamma\$)/, "Symbol G")} 1' | awk 'BEGIN { ORS = " " } { print }' >> Final-X-ticks.dat
done
rm symbl-* symbl.dat kpoints-*
I tried
cat > for.sh <<EOF
above contents
EOF
but it gives me this error:
cat for.sh << EOF
^
"plot.gnu", line 19: invalid command
You can use a datablock to hold the text, then print it out. For example,
$MYSCRIPT <<EOF
#!/bin/bash
rm Final-X-ticks.dat
...
rm symbl-* symbl.dat kpoints-*
EOF
set print "for.sh"
print $MYSCRIPT
set print
set xtics (`(bash ./for.sh)`)
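Alternatively (not part of the answer above), for.sh can be written from the surrounding shell before gnuplot runs, using a quoted heredoc delimiter ('EOF') so that the $ variables and backticks in the script body are not expanded at creation time:

```shell
#!/bin/bash
# Sketch: write for.sh with a quoted heredoc. Quoting the delimiter
# ('EOF') disables parameter expansion and command substitution inside
# the body, so the script is written out verbatim. The echoed tics list
# here is a placeholder standing in for the real for.sh output.
cat > for.sh <<'EOF'
#!/bin/bash
echo '("G" 0.0, "X" 0.5)'
EOF
chmod +x for.sh
```

gnuplot can then pick the output up with its backquote substitution, e.g. `set xtics (`(bash ./for.sh)`)`, as in the answer.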

Can I use hexdump in a shell script?

Can I use hexdump in a shell script?
When I use it, I keep getting this error:
syntax error near unexpected token 'hexdump'
#!/bin/bash
#bash-hexdump
# Quick script to check delay of the shotpoints
echo " please enter the complete line name as mentioned in the RAID2 "
read $line
cd /argus/raid2/"$line"
echo
echo " Entering the directory "
echo
for file in /argus/raid2/"$line"/*.ffid
hexdump -e "16 \"%_p\" \"\\n\"" $FFID | sed -n '68,73p' > list1
done
for filename in 'cat list1'
do
sed -n 6p | awk '{print $1}' = $wd
cat list.txt | sed -n 1p | cut -c13-14 = $hh
cat list.txt | sed -n 1p | cut -c15-16 = $mm
cat list.txt | sed -n 2p | cut -c1-2 = $ss
done
while [ true ]
do
$FFID=`ls -1rt $1 | grep -i ffid | tail -1`
echo " FFID value is : $FFID"
while [ $FFID = `ls -1rt $1 | grep -i ffid | tail -1` ]
do
hexdump -e "16 \"%_p\" \"\\n\"" $FFID | sed -n '68,73p' > list
done
for filename in 'cat list'
do
cat list.txt | sed -n 1p | cut -c13-14 = $hh1
cat list.txt | sed -n 1p | cut -c15-16 = $mm1
cat list.txt | sed -n 2p | cut -c1-2 = $ss1
done
$time1 = "$hh"":""$mm"":""$ss" ;
$time2 = "$hh1"":""$mm1"":""$ss1" ;
$former_seconds = $(date --date= "$time1" +%s);
$later_seconds = $(date --date= "$time2" +%s);
$time_delay = ( "$later_seconds" - "$former_seconds" )
$wb_time = ( "$wd" * 1.33 )
if
(("$wb_time" + "$time_delay")) < 12.0
then
echo "please slow down"
fi
if [ -e EOL.ffid ]
then
echo "EOL.ffid detected, exiting script"
exit
fi
done
I am not able to figure out why the hexdump code is giving me an error. Please help.
You are missing the do in your for loop:
for file in /argus/raid2/"$line"/*.ffid
do
hexdump -e "16 \"%_p\" \"\\n\"" $FFID | sed -n '68,73p' > list1
done
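Beyond the missing do, the script's assignments are reversed: lines like `cat list.txt | sed -n 1p | cut -c13-14 = $hh` do nothing useful. In shell, the variable name goes on the left with no $, and a command's output is captured with `$( )` (likewise `read line`, not `read $line`). A sketch, with the column positions taken from the question's script:

```shell
#!/bin/bash
# Capture command output into variables with var=$(command), not
# "command = $var".
read_hms() {
    # read_hms FILE -> prints HH:MM:SS taken from the fixed columns the
    # question uses: cols 13-14 and 15-16 of line 1, cols 1-2 of line 2.
    local hh mm ss
    hh=$(sed -n 1p "$1" | cut -c13-14)
    mm=$(sed -n 1p "$1" | cut -c15-16)
    ss=$(sed -n 2p "$1" | cut -c1-2)
    printf '%s:%s:%s\n' "$hh" "$mm" "$ss"
}

# Epoch seconds for the delay arithmetic, e.g.:
#   former_seconds=$(date --date="$(read_hms list.txt)" +%s)
```

Note also that `date --date="$time1"` needs the value directly attached to the option; the original's `--date= "$time1"` (with a space) passes an empty date.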

grep text in a loop for multiple file but only grep the first

Hi, I have a bunch of files which can be read as text, and I am grepping key features within them. I have multiple folders with multiple files; each folder is its own set.
a.file
b.file
c.file
d.file
etc...
Within the files there are similar features that I use to group them: I grep the shared trait and use it as the name of a new text file, then I store the unique traits inside that text file:
similarA.txt include uniqueA and uniqueB
similarB.txt include uniqueC and uniqueD
The problem with my script right now is that it only takes the first file and that file's unique ID, then moves on to the next folder; it does not grep from every file in the folder.
ls $parent_folder | while read p; do
dir=$new_folder/$p
ls -1 $parent_folder/$p/individual_folder | while read s; do
find $parent_folder/$p/individual_folder/$s -type f -iname "*.dcm" | while read f; do
if [[ $f == *.gz ]]; then
cp $f $new_folder
target_file="`ls -1 $new_folder | grep ".gz" | head -1`"
gunzip $new_folder/$target_file
target_file="`ls -1 $new_folder | grep ".dcm" | head -1`"
folder_name=`/usr/local/bin/file2txt_command $new_folder/$target_file | grep -m1 "0020,000d" | cut -d "[" -f2 | cut -d "]" -f1`
folder_namedir=$dir/$folder_name
common_trait=`/usr/local/bin/file2txt_command $new_folder/$target_file | grep -m1 "0020,000e" | cut -d "[" -f2 | cut -d "]" -f1`
if [ ! -a "$folder_namedir/$common_trait.txt" ]; then
touch $folder_namedir/$common_trait.txt
fi
unqiue_trait=`/usr/local/bin/file2txt_command $new_folder/$target_file | grep -m1 "0008,0018" | cut -d "[" -f2 | cut -d "]" -f1`
grep -qsF $unqiue_trait $folder_namedir/$common_trait.txt || echo $unqiue_trait >> $folder_namedir/$common_trait.txt
rm $new_folder/$target_file
else
folder_name=`/usr/local/bin/dcm4che-3.3.3/bin/file2txt_command $f | grep -m1 "0020,000d" | cut -d "[" -f2 | cut -d "]" -f1`
folder_namedir=$dir/$folder_name
common_trait=`/usr/local/bin/file2txt_command $f | grep -m1 "0020,000e" | cut -d "[" -f2 | cut -d "]" -f1`
if [ ! -a "$folder_namedir/$common_trait.txt" ]; then
touch $folder_namedir/$common_trait.txt
fi
unqiue_trait=`/usr/local/bin/file2txt_command $f | grep -m1 "0008,0018" | cut -d "[" -f2 | cut -d "]" -f1`
grep -qsF $unqiue_trait $folder_namedir/$common_trait.txt || echo $unqiue_trait >> $folder_namedir/$common_trait.txt
fi
exit
done
done
done
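The likely culprit is the bare `exit` just before the innermost `done`: the `find ... | while read f` loop runs in a subshell, and `exit` terminates that subshell after the first file. A minimal sketch of the corrected loop shape (the per-file work is elided):

```shell
#!/bin/bash
# An `exit` at the end of a piped `while read` body stops the loop after
# its first iteration; dropping it lets the loop visit every file.
each_dcm() {
    find "$1" -type f -iname '*.dcm' | while read -r f; do
        echo "processing $f"   # per-file grep/cut work goes here
        # exit                 # <- this line cut the original loop short
    done
}
```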

loop multiple file with only 1st file being read

So I have a script that goes into multiple directories and, within each directory, takes multiple fields from files and stores them in .txt files.
There are two loops: the first loops through all the folders,
and the second loops through all the files.
The problem I encounter is in the second loop: it reads only the first file in the folder, then moves on to the next folder and ignores all the other files in the folder.
archive=/imdata/archive
inventory_archive=/imdata/a/shares/b/inventory/c
ls $archive | while read p; do
echo "Project: $p"
mkdir -v $inventory_archive/$p
dir=$inventory_archive/$p
ls -1 $archive/$p/d001 | while read s; do
echo "Searching Session: $s ..."
find $archive/$p/d001/$s -type f -iname "*.txt" | while read f; do
echo "FILE: $f"
study=`/home/me/program/bin/script $f | grep -m1 "field1" | cut -d "[" -f2 | cut -d "]" -f1`
echo "SID: $study"
if [ ! -d "$dir/$study" ]; then
mkdir -v $dir/$study
fi
studydir=$dir/$study
series=`/home/me/program/bin/script $f | grep -m1 "field2" | cut -d "[" -f2 | cut -d "]" -f1`
echo "SID_2: $series"
if [ ! -a "$studydir/$series.txt" ]; then
touch $studydir/$series.txt
fi
sop=`/home/me/program/bin/script $f | grep -m1 "field3" | cut -d "[" -f2 | cut -d "]" -f1`
echo "SID_3: $sop"
grep -qsF $sop $studydir/$series.txt || echo $sop >> $studydir/$series.txt
exit 1;
done;
done;
done;
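As in the previous question, the `exit 1` inside the innermost loop ends the piped subshell after a single file. While removing it, the loop can also be made safe for file names containing spaces by feeding `find -print0` into a null-delimited read (a sketch; the per-file extraction is elided):

```shell
#!/bin/bash
# Visit every .txt file under a directory, with no early exit and
# robust handling of file names that contain spaces.
each_txt() {
    find "$1" -type f -iname '*.txt' -print0 |
        while IFS= read -r -d '' f; do
            echo "FILE: $f"    # per-file field extraction goes here
        done
}
```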

Deleting specific symbols inside file

I have a problem. I have a file in this format:
Hi / Tom /
Be / Nice /
...
And I need to delete the "/" characters and the extra spaces, and then sort it:
Be Nice
Hi Tom
...
sed -e 's|/ *||g' -e 's/ *$//' path_to_file | sort
you can use the tr command, like
cat inputfile.txt | tr -d '/' | tr -s ' ' | sed 's/ *$//' | sort
