shell script to read directory names and create .txt files with the same names in another directory - linux

I have two directories, one called clients and another called test. Inside the clients directory I have some folders. I need a shell script that reads the names of the folders inside clients and creates .txt files with the same names inside the test folder. I am very new to shell and have no idea how to do this; could you help me please?

Try using xargs with ls. ls -F lists everything in the clients directory, but marks directories with an extra / at the end. The grep uses that trailing / in the output of ls -F to pass only folders to the next command. Then sed 's/\///g' removes the trailing / and passes the names on to xargs. xargs substitutes each folder name for the % symbol, and touch creates the text files with those names.
ls -F clients | grep / | sed 's/\///g' | xargs -I % touch test/%.txt
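If you would rather not parse ls output at all, a glob-based loop does the same thing. A minimal sketch, assuming it is run from the directory that contains both clients and test:
#!/bin/bash
# create an empty .txt in test/ for every directory under clients/
for dir in clients/*/; do
    name=$(basename "$dir")    # folder name without the clients/ prefix or trailing /
    touch "test/$name.txt"
done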

Related

List all folders and subfolders where folder names start with a* or b* or c*, with path

I need the folders and subfolders whose names start with A*, B* or C* to be displayed along with their paths.
The command below does not display what I expect:
$ ls -l | egrep d
You can get the current directory from the environment variable PWD and combine it with your ls command, using ls -ld:
ls -ld $PWD/A* $PWD/B* $PWD/C*
EDIT
If you want a list of all the directories and sub directories you can use the find command.
find . > subfolders.txt && cat subfolders.txt | egrep -i "^./E|^./g"
This command will recursively list all contents of your current working directory and send the output to a txt file named subfolders.txt. Then it reads the contents of subfolders.txt and, using egrep, filters for anything that starts with "./E" or "./g". The -i option makes the match case insensitive.
NOTE: This will also display the files contained in those subfolders.
find . | grep -E '/A|/B|/C'
find is better than ls for your requirements.
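If you only want the directories themselves (not the files inside them), find can also match the names directly. A minimal sketch, assuming -iname is available for case-insensitive matching:
find . -type d \( -iname 'a*' -o -iname 'b*' -o -iname 'c*' \)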

How to get a folder name in linux bash from a directory

There is a directory which will have any number of folders and maybe some files. I just need to pick one random folder and process it (move the folder, etc.). I need to process the folders one by one, and ignore any files.
I am trying the code below. It is able to get the folder name, but there seems to be some hidden character or something that is not giving the proper output.
PROCESSING_FOLDER_NAME= ls -l /ecom/bin/catalogUpload/input/TNF-EU/ | grep '^d' | cut -d ' ' -f23 | head -1
#PROCESSING_FOLDER_NAME= echo $PROCESSING_FOLDER_NAME | tr -d '\n\r'
#PROCESSING_FOLDER_NAME=${PROCESSING_FOLDER_NAME%$'\n'}
#echo "PROCESSING_FOLDER_NAME is/$PROCESSING_FOLDER_NAME "
echo "/ecom/bin/catalogUpload/input/TNF-EU/$PROCESSING_FOLDER_NAME/"
output
Thanks_giving_Dec_08
/ecom/bin/catalogUpload/input/TNF-EU//
I am expecting the output should be /ecom/bin/catalogUpload/input/TNF-EU/Thanks_giving_Dec_08/
Here is my bash version.
GNU bash, version 4.2.50(1)-release (powerpc-ibm-aix6.1.2.0)
I mainly need the folder name (not the full path) in a variable, as the folder name that is being processed needs to be used in emails to notify others, etc.
To get a random folder from a list of folders,
first put the list of folders in an array:
list=(/ecom/bin/catalogUpload/input/TNF-EU/*/)
Next, get a random index using the $RANDOM variable of the shell,
modulo the size of the list:
((index = RANDOM % ${#list[@]}))
Print the value at the selected index:
echo "${list[index]}"
To get just the name of the directory without the full path, you can use the basename command:
basename "${list[index]}"
As for what's wrong with the original script:
To store the result of a command in a variable, the syntax is name=$(cmd) instead of name= cmd
Do not parse the output of ls, it's not reliable
To get directories in a directory, you can use glob patterns like * ending with /, as in the above example */.
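Putting those pieces together, a minimal sketch using the path from the question (the variable name is the one from your script):
#!/bin/bash
base=/ecom/bin/catalogUpload/input/TNF-EU
list=("$base"/*/)                      # trailing / keeps only directories
((index = RANDOM % ${#list[@]}))       # random index into the array
PROCESSING_FOLDER_NAME=$(basename "${list[index]}")
echo "$base/$PROCESSING_FOLDER_NAME/"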

Command line bash for entering multiple directories and executing a command

I'm new to this site (and to programming, more or less), but I'm hoping you can help.
I have numerous directories named 3K, 4K, 5K, etc. Within each directory I have 12 subdirectories named v1 to v12, each containing a file called OUTCAR. I am trying to write a bash command that will allow me to enter each of the subdirectories and gather data from OUTCAR.
The function works with no issues when I enter each subdirectory individually.
I'm using
for file in v{1..12} ; do grep "key_string" OUTCAR | awk '{print(relevant_stuff)}' > output.dat ; done
from the *K directory that contains the v{1..12} subdirectories.
However, I'm getting an error telling me that OUTCAR doesn't exist for each v{1..12}. I know it does, so I'm guessing that I haven't properly directed the command to cd into each subdirectory first. Any tips?
Thanks!
You would be better off using this find command from the top-level directory where these subdirectories exist:
find . -type d -name 'v[0-9]*' \
    -exec awk '/key_string/ {print FILENAME ":" $0}' {}/OUTCAR \; >> output.dat
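If you would rather keep your loop, the usual fix is to reference the file through each subdirectory's path instead of cd-ing into it. A minimal sketch run from inside one *K directory (key_string and the awk body are placeholders from your command):
for d in v{1..12}; do
    [ -f "$d/OUTCAR" ] || continue                       # skip any v* that has no OUTCAR
    grep "key_string" "$d/OUTCAR" | awk '{print $0}' >> output.dat
done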

rsync to backup one file generated in dynamic folders

I'm trying to back up just one file that is generated by another application in dynamically named folders.
for example:
parent_folder/
back_01 -> file_blabla.zip (timestamp 2013.05.12)
back_02 -> file_blabla01.zip (timestamp 2013.05.14)
back_03 -> file_blabla02.zip (timestamp 2013.05.22)
I need to get the latest generated zip, just that one. The name of the file doesn't matter: as long as it is the latest, is a zip, and is inside "parent_folder", get that one.
Also, when I do the rsync, the folder structure plus the file name is recreated on the destination, and I want to omit that. I want to back up that file into one folder and under one name, so I know where the latest is and it is always named the same.
Right now I'm doing this with a Perl script that gets the latest generated folder with
ls -tAF | grep '/$' | head -1
and then performs the rsync. It does bring the last zip, but with the folder structure that I don't want, so it doesn't overwrite my latest zip file.
rsync -rvtW --prune-empty-dirs --delay-updates --no-implied-dirs --modify-window=1 --include='*.zip' --exclude='*.*' --progress /source/ /myBackup/
It would also be great if I could do the rsync without needing to use Perl or any other script.
Thanks
The file names will differ each time? That makes it hard for any type of syncing to work.
What you could do is create a new folder outside of where the zip is generated, then:
Before you start, remove the last symlinked file in that folder.
When the newest folder is found, i.e. ls -tAF | grep '/$' | head -1 ...
symlink its zip into this folder.
Then rsync, ssh or unison the file across to the new node.
If the symlink name is file-latest.zip then it will always be this one file that is sent across.
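A minimal sketch of that symlink idea, assuming a staging directory /myBackup/staging and that it runs from the directory that contains parent_folder:
mkdir -p /myBackup/staging
rm -f /myBackup/staging/file-latest.zip        # drop the previous link first
latest_dir=$(cd parent_folder && ls -tAF | grep '/$' | head -1)
newest_zip=$(ls -t parent_folder/"$latest_dir"*.zip | head -1)
ln -s "$PWD/$newest_zip" /myBackup/staging/file-latest.zip
rsync -Lvt /myBackup/staging/file-latest.zip /myBackup/    # -L copies the file the link points to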
But why do all that when you can just scp? You can take a look here:
https://github.com/vahidhedayati/definedscp
for a more long-winded approach. It is not for this exact situation, but it uses the real file date/time stamp and converts it to seconds, which might be useful if you wish to do the stat in a different way.
Using stat to work out the latest file and then simply scp it across, here is something to get you started:
One liner:
scp $(find /path/to/parent_folder -name \*.zip -exec stat -t {} \;|awk '{print $1" "$13}'|sort -k2nr|head -n1|awk '{print $1}') remote_server:/path/to/name.zip
A more long-winded way, maybe of use for understanding what the above is doing:
#!/bin/bash
FOUND_ARRAY=()
cd parent_folder || exit 1
# collect "filename mtime" pairs for every zip (field 13 of stat -t is the mtime in seconds)
for file in $(find . -name \*.zip); do
    ptime=$(stat -t "$file" | awk '{print $13}')
    FOUND_ARRAY+=("$file $ptime")
done
# join the pairs with newlines, sort by mtime descending, keep the newest file name
IFS=$'\n'
FOUND_FILE=$(echo "${FOUND_ARRAY[*]}" | sort -k2nr | head -n1 | awk '{print $1}')
scp "$FOUND_FILE" remote_host:/backup/new_name.zip
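With GNU find you can skip the array entirely, since -printf can emit the mtime. A minimal sketch, assuming the same layout and destination as above:
# %T@ is the mtime in seconds; sort newest first, keep only the path
newest=$(find parent_folder -name '*.zip' -printf '%T@ %p\n' | sort -nr | head -n1 | cut -d' ' -f2-)
scp "$newest" remote_host:/backup/new_name.zip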

Launching program several times

I am using macOS. This is the command line code to launch my program (two parts):
nucmer --mum file1.txt file2.txt
show-snps -Clr -x 2 out.delta > out_file1.snps
The first part of the program creates the file out.delta. My file2.txt is always the same, but I want to launch both parts 35000 times with a different file1.txt each time. All the file1s are located in the same directory.
Is it possible to do it using BASH?
Keep all the input files in a directory. Create a wrapper script that invokes the nucmer script and then the show-snps script. Your wrapper script will accept the path to the file directory as input, iterate over all files in the directory, and call your two scripts. A sketch of such a wrapper is below.
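A minimal sketch of that wrapper, assuming the directory of file1 inputs is passed as the first argument and that file2.txt sits in the current directory (the script name run_all.sh is made up):
#!/bin/bash
# usage: ./run_all.sh /path/to/file1_directory
input_dir=$1
for f in "$input_dir"/*.txt; do
    nucmer --mum "$f" file2.txt
    show-snps -Clr -x 2 out.delta > "out_$(basename "$f" .txt).snps"
done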
You could do something along these lines:
find . -maxdepth 1 -type f -print | grep -v './out_' | while read f
do
    b=$(basename "${f}" .txt)
    nucmer --mum "${f}" file2.txt
    show-snps -Clr -x 2 out.delta > "out_${b}.snps"
done
The find bit finds all files in the current directory. grep filters out any previous output files, in case you've run some previously. The basename line strips off the leading ./ and trailing extension, and then your two programs get run with the input file name and an output filename based on the basename output.
If you don't get an "argument list too long" error, you could just use for:
for f in file*.txt; do nucmer --mum $f second.txt; show-snps -Clr -x 2 out.delta > out_${f%.txt}.snps; done
