How to remove whitespace or newlines from string values fetched from a text file - linux

The sample.txt file contains the values below:
1111
2222
3333
These values will be used to access folders named 1111, 2222, 3333.
I'm referring to these values using the code below:
for i in $(cat sample.txt); do cd folder1/$i; done
It gives the error: can't cd : folder1/1111
I assume this is because each string value read from the file has some trailing whitespace or newline character, but I'm not sure exactly.
I tried the commands below with no success:
for i in $(cat sample.txt); do i = $(echo ${i::-1 }); cd folder1/$i; done
for i in $(cat sample.txt); do i = $(echo $i | tr -d "[:space:]"); cd folder1/$i; done
for i in $(cat sample.txt); do cd folder1/$i; done

I guess you don't get the error for the first line, but only from the second line onwards.
If you have successfully done
cd folder1/1111
the command
cd folder1/2222
will fail because you already are in folder1/1111.
You should cd back to the original directory or do the cd and whatever else you want to do in a subshell.
BTW: You should not read the lines with for, because this will fail if you have a line that contains a space.
See https://mywiki.wooledge.org/DontReadLinesWithFor
and https://mywiki.wooledge.org/BashFAQ/001.
This should work
while read -r dir
do
    ( cd "folder1/$dir" )   # ( ... ) to use a subshell
    # after leaving the subshell, here we are back in the original directory
done < sample.txt
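If the lines really do carry trailing carriage returns (for example because sample.txt was written on Windows), a hedged variant of the same loop that strips them first might look like this; the parameter expansion is the only addition:
while read -r dir
do
    dir=${dir//$'\r'/}       # drop any carriage return left over from CRLF line endings
    ( cd "folder1/$dir" )    # subshell, so we stay in the original directory
done < sample.txt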

Related

Why is a part of the code inside a (False) if statement executed?

I wrote a small script which:
prints the content of a file (generated by another application) on paper with a matrix printer
prints the same line into a backup file
removes the original file.
The script runs every minute via a cronjob and works fine as long as there are files to print. If there are no files to print, it prints an empty line on the matrix printer and in the backup file. I don't understand why this happens, as I implemented an if statement which checks whether there is a file to print before the print command is executed. This behaviour only happens if the script is executed by cron, not if I execute it manually with ./script.sh. What's the reason for this, and how can I solve it?
Something I noticed on the side: if I place an echo "hi" command in the script, it's printed to the matrix printer and the backup file. I expected it to be printed to the console when there is no >> something behind it. How does this work?
The script:
#!/bin/bash
# Make sure the backup directory exists
if [ ! -d /home/user/backup_logprint ]
then
    mkdir /home/user/backup_logprint
fi
# Print the records if there are any
date=`date +%Y-%m-%d`
filename='_logprint_backup'
printer_path="/dev/usb/lp0"
if [ `ls /tmp/ | grep logprint | wc -l` -gt 0 ]
then
    for f in `ls /tmp | grep logprint`
    do
        echo `cat /tmp/$f` >> "/home/user/backup_logprint/$date$filename"
        echo `cat /tmp/$f` >> $printer_path
        rm "/tmp/$f"
    done
fi
There's no need for ls or an if statement. Just use a proper glob in the for loop; if no files match, the loop won't be entered.
#!/bin/bash
# Don't check first; just let mkdir decide if
# anything actually needs to be created.
d=/home/user/backup_logprint
mkdir -p "$d"
filename=$(date +"$d/%Y-%m-%d_logprint_backup")
printer_path="/dev/usb/lp0"
# Cause non-matching globs to expand to an empty
# sequence instead of being treated literally.
shopt -s nullglob
for f in /tmp/*logprint*; do
    cat "$f" > "$printer_path" && mv "$f" "$d"
done
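For illustration, this is roughly how nullglob changes what a non-matching glob expands to (the pattern below is made up):
shopt -u nullglob
for f in /tmp/*no_such_prefix*; do echo "got: $f"; done   # prints the literal pattern
shopt -s nullglob
for f in /tmp/*no_such_prefix*; do echo "got: $f"; done   # prints nothing; the loop body never runs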

How to filter out / ignore specific lines when comparing text files with diff

To further clarify what I am trying to do, I wrote the script below. I am attempting to audit some files between my QA and PRD environments and would like the final diff output to ignore hard-coded values such as SQL connections. I have about six different values to filter. I have tried several ways, but so far I have not been able to get any of them to work as needed. I am open to doing this another way if anyone has any ideas. I am pretty new to script development, so I'm open to any ideas or information. Thanks :)
#!/bin/bash
#*********************************************************************
#
# Name: compareMD5.sh
# Date: 02/12/2018
# Script Location:
# Author: Maggie o
#
# Description: This script will pull absolute paths from a text file
# and compare the files via ssh between QA & PRD on md5sum
# output match or no match
# Then the file the non matching files will be imported to a
# tmp directory via scp
# Files will be compared locally and exclude whitespace,
# spaces, comments, and hard coded values
# NOTE: Script may take a several minutes to run
#
# Usage: Auditing QA to PRD Pass 3
# nohup ./compareMD52.sh > /output/compareMD52.out 2> /error/compareMD52.err
# checking run ps -ef | grep compareMD52*
#**********************************************************************
rm /output/no_matchMD5.txt
rm /output/filesDiffer.txt
echo "Filename | Path" > /output/matchingMD5.txt
#Remove everything below the tmp directory recursively, as it was created by a previous script run
rm -rf /tmp/*
for i in $(cat /input/comp_list.txt) #list of files with absolute paths output by compare script
do
export filename=$(basename "$i") #Grab just the filename
export path=$(dirname "$i") #Just the Directory
qa_md5sum=$(md5sum "$i") #Get the md5sum
qa_md5="${qa_md5sum%% *}" #remove the appended path
export tmpdir=(/tmp"$path")
# if the stat is not null then run the if, i.e. if the file exists
if ssh oracle@Someconnection stat $path'$filename' \> /dev/null 2\>\&1
then
prd_md5sum=$(ssh oracle@Somelocation "cd $path; find -name '$filename' -exec md5sum {} \;")
prd_md5="${prd_md5sum%% *}" #remove the appended path
if [[ $qa_md5 == $prd_md5 ]] #Match hash as integer
then
echo $filename $path " QA Matches PRD">> /output/matchingMD5.txt
else
echo $i
echo $tmpdir
echo "Copying "$i" to "$tmpdir >> /output/no_matchMD5.txt
#Copy the file from PRD to a tmp dir in QA; keep the dir structure to avoid issues with the same filename existing in different directories
mkdir -p $tmpdir # -p creates only if not existing, does not produce errors if existing
scp oracle@Somelocation:$i $tmpdir # get the file from PRD, insert into the tmp directory
fi
fi
done
for x in $(cat /output/no_matchMD5.txt) #do a local compare using diff
do
comp_filename=$(basename "$x")
#Ignore Comments, no white space, no blank lines, and only report if different but not How different
qa=(/tmp"$x")
#IN TEST
if diff -bBq -I '^#' $x $qa >/dev/null
# Fails to catch differences if the comment is not at the start of a line
then
echo $comp_filename " differs more than just white space, or comment"
echo $x >> /output/filesDiffer.txt
fi
done
You can pipe the output into grep -v
Like this:
diff -bBq TEST.sh TEST2.sh | grep -v "^#"
I was able to get this figured out using this method
if diff -bBqZ -I '^#' <(grep -vE '(thing1|thing2|thing3)' $x) <(grep -vE '(thing1|thing2|thing3)' $prdfile)
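For reference, a minimal sketch of that pattern with placeholder file names and filter patterns (adjust both to the real hard-coded values being ignored):
qa_file=qa/someconfig.sh     # placeholder paths
prd_file=prd/someconfig.sh
if diff -bBq -I '^#' \
    <(grep -vE 'dbhost|dbuser|dbpass' "$qa_file") \
    <(grep -vE 'dbhost|dbuser|dbpass' "$prd_file") >/dev/null
then
    echo "no relevant differences"
else
    echo "files differ beyond the ignored lines"
fi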

Linux: Update directory structure for millions of images which are already in prefix-based folders

This is basically a follow-up to Linux: Move 1 million files into prefix-based created Folders
The original question:
I want to write a shell command to rename all of those images into the
following format:
original: filename.jpg new: /f/i/l/filename.jpg
Now, I want to take all of those files and add an additional level to the directory structure, e.g:
original: /f/i/l/filename.jpg new: /f/i/l/e/filename.jpg
Is this possible to do with command line or bash?
One way to do it is to simply loop over all the directories you already have, and in each bottom-level subdirectory create the new subdirectory and move the files:
for d in ?/?/?/; do (
    cd "$d" &&
    printf '%.4s\0' * | uniq -z |
    xargs -0 bash -c 'for prefix do
        s=${prefix:3:1}
        mkdir -p "$s" && mv "$prefix"* "$s"
    done' _
) done
That probably needs a bit of explanation.
The glob ?/?/?/ matches all directory paths made up of three single-character subdirectories. Because it ends with a /, everything it matches is a directory so there is no need to test.
( cd "$d" && ...; )
executes ... after cd'ing to the appropriate subdirectory. Putting that block inside ( ) causes it to be executed in a subshell, which means the scope of the cd will be restricted to the parenthesized block. That's easier and safer than putting cd .. at the end.
We then collect the subdirectories first, by finding the unique initial strings of the files:
printf '%.4s\0' * | uniq -z | xargs -0 ...
That extracts the first four letters of each filename, nul-terminating each one, then passes this list to uniq to eliminate duplicates, providing the -z option because the input is nul-terminated, and then passes the list of unique prefixes to xargs, again using -0 to indicate that the list is nul-terminated. xargs executes a command with a list of arguments, issuing the command several times only if necessary to avoid exceeding the command-line limit. (We probably could have avoided the use of xargs but it doesn't cost that much and it's a lot safer.)
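For example, with some made-up file names, the front of that pipeline produces one NUL-terminated prefix per group of files:
printf '%.4s\0' abcd001.jpg abcd002.jpg abce003.jpg | uniq -z | tr '\0' '\n'
# abcd
# abce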
The command called with xargs is bash itself; we use the -c option to pass it a command to be executed. That command iterates over its arguments using the for arg do syntax (a for loop with no in list iterates over the positional parameters). Each argument is a unique prefix; we extract the fourth character from the prefix to construct the new subdirectory and then mv all files whose names start with the prefix into the newly created directory.
The _ at the end of the xargs invocation will be passed to bash (as with all the rest of the arguments); bash -c uses the first argument following the command as the $0 argument to the script, which is not part of the command-line arguments iterated over by the for loop. So putting the _ there means that the argument list constructed by xargs will be precisely $1, $2, ... in the execution of the bash command.
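As a tiny illustration of the bash -c '...' _ args pattern (the file names here are invented):
bash -c 'echo "positional zero: $0"
for prefix do
    echo "argument: $prefix"
done' _ file0001 file0002
# positional zero: _
# argument: file0001
# argument: file0002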
Okay, so I've created a very crude solution:
#!/bin/bash
for file1 in *; do
    if [[ -d "$file1" ]]; then
        cd "$file1"
        for file2 in *; do
            if [[ -d "$file2" ]]; then
                cd "$file2"
                for file3 in *; do
                    if [[ -d "$file3" ]]; then
                        cd "$file3"
                        for file4 in *; do
                            if [[ -f "$file4" ]]; then
                                echo "mkdir -p ${file4:3:1}/; mv $file4 ${file4:3:1}/;"
                                mkdir -p ${file4:3:1}/; mv $file4 ${file4:3:1}/;
                            fi
                        done
                        cd ..
                    fi
                done
                cd ..
            fi
        done
        cd ..
    fi
done
I should warn that this is untested, as my actual structure varies slightly, but I wanted to keep the question/answer consistent with the original question for clarity.
That being said, I'm sure a much more elegant solution exists than this one.

Reading a specified file's line and creating new directories from words taken from that file

for file in $*
head -n 1 $file | while read folder
do
mkdir $directory $folder
done
Hello guys, I'm having a problem with my script. What I want to do is read the first line from my specified file and create new directories in my specified directory from the words taken from that line.
I'm getting errors like this:
./scriptas: line 2: syntax error near unexpected token `head'
./scriptas: line 2: `head -n 1 $file | while read folder'
And my second question: how do I add a second variable, $directory, from the command line (PuTTY)?
Example: I have a file with this text:
one two three
five seven nine eleven
okey
I need the script to take the first line and create the directories "one", "two" and "three".
You have to put do before the commands in a for/while loop.
Your code should look something like this:
#!/bin/bash
files=$*
for file in $files
do
    head -n1 "$file" | while read dname
    do
        mkdir $dname
    done
done
As for other variables, the simple syntax is a number after the $ sign (the positional parameters).
so you could do
files="$1"
directory="$2"
and then run the script as
./script.sh "file1.txt file2.txt file3.txt" dir2
More complex solutions include getopts and such....
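For what it's worth, a minimal getopts sketch of that approach might look like this (the -d option letter is made up for illustration):
#!/bin/bash
destdir=.
while getopts "d:" opt; do
    case $opt in
        d) destdir=$OPTARG ;;
        *) echo "usage: $0 [-d destdir] file..." >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
for file in "$@"; do
    for word in $(head -n 1 "$file"); do
        mkdir -p "$destdir/$word"     # one directory per word of the first line
    done
done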
I have updated my script below. You can use it in this way:
script.sh "one.txt two.txt three.txt" destdir
#!/bin/bash
for files in $1
do
    for i in $(head -n 1 $files)
    do
        if [ -z $2 ]
        then
            mkdir $i
        else
            mkdir -p $2/$i
        fi
    done
done
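Another hedged sketch of the same idea, using read -a to split the first line into an array (the positional arguments keep the meaning used above: $1 is the list of files, $2 the optional target directory):
#!/bin/bash
target=${2:-.}                       # default to the current directory if $2 is empty
for file in $1; do
    read -r -a words < "$file"       # read -a splits the first line on whitespace
    for w in "${words[@]}"; do
        mkdir -p "$target/$w"
    done
done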

Shell script that writes a shell script

Two questions: how can I write a shell variable from this script into its child script?
Are there any easier ways to do this?
If you can't follow what I'm doing, I'm:
1) starting with a list of directories whose names will be stored as values taken by $i
2) cd'ing to every value of $i and ls'ing its contents
3) echoing its contents into a new script with the name of the directory via cat
4) using echo and cat to write a new script that contains the ls'd values of $i and sends them all to a blogging email address called $i@tumblr.com
#/bin/sh
read -d '' commands <<EOF
#list of directories goes here
dir1
dir2
dir3
etc...
EOF
for i in $commands
do
cd $SPECIALPATH/$i
echo ("#/bin/sh \n read -d '' directives <<EOF \n") | cat >> $i.sh
ls | cat >> $i.sh
echo ("EOF \n for q in $directives \n do \n uuencode $q $q | sendmail $i \n done \n") | cat >> $i.sh
# NB -- I am asking the script to write the shell variable $i into the new
# script, called $i.sh, as the email address specified, in the middle of an
# echo statement... I am well aware that it doesn't work as is
chmod +x $i.sh
./$i.sh
done
You are abusing felines a lot - you should simply redirect, rather than pipe to cat which appends.
You can avoid the intermediary $i.sh file by bundling all the output that goes to the file with a single I/O redirection that pipes directly into a shell - no need for the intermediate file to be cleaned up (you didn't show that happening) or for the chmod operation.
I would have done this using braces:
{
echo "..."
ls
echo "..."
} | sh
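For instance, a tiny throwaway example of that shape (the generated commands are arbitrary):
{
    echo 'echo "this line was generated"'
    echo 'date'
} | sh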
However, when I looked at the script in that form, I realized that wasn't necessary. I've left the initial part of your script unchanged, but the loop is vastly simpler like this:
#!/bin/sh
read -d '' commands <<EOF
#list of directories goes here
dir1
dir2
dir3
etc...
EOF
for i in $commands
do
    (
    cd $SPECIALPATH/$i
    ls |
    while read q
    do uuencode $q $q | sendmail $i
    done
    )
done
I'm assuming the sendmail command works - it isn't the way I'd try sending email. I'd probably use mailx or something similar, and I'd avoid using uuencode too (I'd use a base-64 encoding, left to my own devices):
do uuencode $q $q | mailx -s "File $q" $i@tumblr.com
The script also uses parentheses around the cd command. It means that the cd command and what follows is run in a sub-shell, so the parent script does not change directory. In this case, with an absolute pathname for $SPECIALPATH, it would not matter much. But as a general rule, it often makes life easier if you isolate directory changes like that.
I'd probably simplify it still further for general reuse (though I'd need to add something to ensure that SPECIALPATH is set appropriately):
#!/bin/sh
for i in "$@"
do
    (
    cd $SPECIALPATH/$i
    ls |
    while read q
    do uuencode $q $q | sendmail $i
    done
    )
done
I can then invoke it with:
script-name $(<list-of-dirs)
That means that without editing the script, it can be reused for any list of directories.
Intermediate step 1:
for i in $commands
do
    (
    cd $SPECIALPATH/$i
    {
        echo "read -d '' directives <<EOF"
        ls
        echo "EOF"
        echo "for q in \$directives"
        echo "do"
        echo " uuencode \$q \$q | sendmail $i"
        echo "done"
    } |
    sh
    )
done
Personally, I find it easier to read the generated script if the code that generates it makes the generated script clear - using multiple echo commands. This includes indenting the code.
Intermediate Step 2:
for i in $commands
do
    (
    cd $SPECIALPATH/$i
    {
        echo "ls |"
        echo "while read q"
        echo "do"
        echo " uuencode \$q \$q | sendmail $i"
        echo "done"
    } |
    sh
    )
done
I don't need to read the data into a variable in order to step through each item in the list once - simply read each line in turn. The while read mechanism is often useful for splitting up a line into multiple variables too: while read var1 var2 var3 junk will read the first field into $var1, the second into $var2, the third into $var3, and if there's anything left over, it goes into $junk. If you've generated the data accurately, there won't be any junk; but sometimes you have to deal with other people's data.
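A quick illustration of that splitting behaviour (the input line is arbitrary):
echo "alpha beta gamma delta epsilon" |
while read var1 var2 var3 junk
do
    echo "1=$var1 2=$var2 3=$var3 rest=$junk"
done
# prints: 1=alpha 2=beta 3=gamma rest=delta epsilon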
If the generated script is meant to be temporary, I would not use files. Besides, chmod'ing them to be executable sounds unsafe. When I needed to parallelize my scripting, I used a bash script to form a set of commands (in an array: split the array in two, then implode the array) into a single \n-separated string, and then passed that to a new bash instance.
Basically, in bash:
for orig in "$@"
do
    commands="$commands echo \"echoing stuff here for arguments $orig\" \n"
done
echo -e $commands | bash
And a small tip: if the script doesn't need supervising, throw in a & after the piped bash to make your first script quit and do the rest of the work forked into the background.
If you export a variable
export VAR1=FOO
it'll be present in any child processes.
If you take a look at the init scripts in /etc/init.d/*, you'll notice that many source another file full of "external" definitions. You could set up a file like that and have your child script source these files.
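A minimal sketch of both approaches (the file and variable names are placeholders):
export VAR1=FOO                  # exported variables are visible in child processes
./child.sh                       # child.sh can read $VAR1

# Or keep shared settings in a separate file and source it from each script:
cat > /tmp/common_defs.sh <<'EOF'
BACKUP_DIR=/home/user/backup
EOF
. /tmp/common_defs.sh            # "." (source) loads the definitions into the current shell
echo "$BACKUP_DIR"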
