Write a bash script that lists all files and subdirectories - linux

I want to write a simple shell script like this:
#!/bin/bash
from_directory="first_directory"
to_directory="second_directory"
rsync --archive $(from_directory) $(to_directory) | ls -R $(to_directory)/$(from_directory)
OR
cp -r $(from_directory) $(to_directory) | ls -R $(to_directory)/$(from_directory)
I get this error ==> ls: cannot access '/home/jilambo/week2/shooter_game': No such file or directory.
The second time I run it, it works, because by then first_directory has already been copied into the second directory.
Thanks.

As pointed out in the comments, you probably want this. Note that $(name) is command substitution, not variable expansion, and the two commands should be run in sequence with ; rather than connected by a pipe:
#!/bin/bash
from_directory="first_directory"
to_directory="second_directory"
rsync --archive "$from_directory" "$to_directory"; ls -R "$to_directory/$from_directory"
And if $from_directory and $to_directory are both absolute paths, $to_directory/$from_directory does not make sense. Might as well just do ls -R $to_directory.
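For reference, here is a runnable sketch of the corrected approach using the cp -r alternative (the directory names are just the question's placeholders, and a tiny sample tree is created so the demo is self-contained); && chains the commands so ls only runs after a successful copy:

```shell
#!/bin/bash
# placeholder directory names from the question
from_directory="first_directory"
to_directory="second_directory"

# build a small sample tree so the demo is self-contained
mkdir -p "$from_directory/sub" "$to_directory"
touch "$from_directory/sub/file.txt"

# && runs ls only if the copy succeeded; a pipe would start both at once
cp -r "$from_directory" "$to_directory" && ls -R "$to_directory/$from_directory"
```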


Why does the file in a subdirectory not get copied?

I have to make a script which will copy all files starting with "db." from a certain location to another location. My script works fine for all files which are directly in the directory, but it doesn't copy any files which are in subdirectories. I have used the parameter -r, which should copy everything recursively, shouldn't it? Why isn't it working and how can I make it work?
My script:
#! /bin/bash
#Script to copy all files that begin with "db.".
#User input
echo -n 'Enter path to copy from: '
read copypath
echo -n 'Enter path to save to: '
read savepath
cp -r $copypath/db.* $savepath
echo 'Done.'
Making an answer out of my comment...
try $copypath/db.* followed by $copypath/**/db.*
The first one is for the top level directory (copypath) and the next for any of the subdirectories.
-r does not work here because you don't supply any source directories to cp.
Before cp is executed bash expands the * and gives the resulting file list to cp. cp then only sees something like cp -r 1stFile 2ndFile 3rdFile ... targetDirectory -- therefore -r has no effect.
As pointed out in the comments, you have to use bash's globstar feature ** or find. Also, you should make a habit of quoting your variables.
# requires bash 4.0 or higher (from the year 2009, but OS X has a really outdated version)
shopt -s globstar
cp "$copypath"/**/db.* "$savepath"
or
find "$copypath" -type f -name 'db.*' -exec cp -t "$savepath" {} +
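The find variant can be sanity-checked with a throwaway tree (the paths below are made up for the demo):

```shell
#!/bin/bash
# throwaway source and destination trees for the demo
copypath=$(mktemp -d)
savepath=$(mktemp -d)

# one matching file at the top level, one nested, one non-match
mkdir -p "$copypath/sub"
touch "$copypath/db.top" "$copypath/sub/db.nested" "$copypath/other.txt"

# find recurses on its own, so no globstar is needed;
# -t (GNU cp) names the target directory before the source files
find "$copypath" -type f -name 'db.*' -exec cp -t "$savepath" {} +

ls "$savepath"
```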

scp multiple files with different names from source and destination

I am trying to scp multiple files from source to destination. The scenario is that the source file name is different from the destination file name.
Here is the scp command I am trying:
scp /u07/retail/Bundle_de.properties rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_de.properties
I have more than 7 files which I am currently transferring with separate scp commands, and I want to combine them into a single scp to transfer all the files.
A few of the scp commands I am running:
$ scp /u07/retail/Bundle_de.properties rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_de.properties
$ scp /u07/retail/Bundle_as.properties rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_as.properties
$ scp /u07/retail/Bundle_pt.properties rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_pt.properties
$ scp /u07/retail/Bundle_op.properties rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_op.properties
I am looking for a way to transfer the above 4 files with a single scp command.
Looks like a straightforward loop in any standard POSIX shell:
for i in de as pt op
do scp "/u07/retail/Bundle_$i.properties" "rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_$i.properties"
done
Alternatively, you could give the files new names locally (copy, link, or move), and then transfer them with a wildcard:
dir=$(mktemp -d)
for i in de as pt op
do cp "/u07/retail/Bundle_$i.properties" "$dir/MultiSolutionBundle_$i.properties"
done
scp "$dir"/* "rgbu_fc#<fc_host>:/u01/projects/"
rm -rf "$dir"
With GNU tar, ssh and bash:
tar -C /u07/retail/ -c Bundle_{de,as,pt,op}.properties | ssh user@remote_host tar -C /u01/projects/ --transform 's/.*/MultiSolution&/' --show-transformed-names -xv
If you want to use globbing (*) with filenames:
cd /u07/retail/ && tar -c Bundle_*.properties | ssh user@remote_host tar -C /u01/projects/ --transform 's/.*/MultiSolution&/' --show-transformed-names -xv
-C: change to directory
-c: create a new archive
Bundle_{de,as,pt,op}.properties: bash expands this to Bundle_de.properties Bundle_as.properties Bundle_pt.properties Bundle_op.properties before the tar command is executed
--transform 's/.*/MultiSolution&/': prepend MultiSolution to the filenames (& stands for the matched name)
--show-transformed-names: show filenames after transformation
-xv: extract files and verbosely list files processed
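Both pieces can be previewed in isolation: the brace expansion is done by the local bash before tar ever runs, and the --transform argument is ordinary sed replacement syntax, where & stands for the whole match:

```shell
# bash expands the braces first; no files need to exist for this
echo Bundle_{de,as,pt,op}.properties
# → Bundle_de.properties Bundle_as.properties Bundle_pt.properties Bundle_op.properties

# the same substitution tar applies via --transform; & is the matched name
echo 'Bundle_de.properties' | sed 's/.*/MultiSolution&/'
# → MultiSolutionBundle_de.properties
```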

creating a new directory each time a shell script is run on linux

I'm trying to create a shell script that copies .log files from one directory to a new directory, with a new name each time the script is run.
For example lets say there's File1.log, File2.log, File3.log in /home/usr/logs
and when this script runs, I want them to be copied to a new location /home/usr/savedlogs/Run1, and the next time it runs, /home/usr/savedlogs/Run2, and so on...
I'm not sure if this would be used:
cp /home/usr/logs/{File1.log,File2.log,File3.log} /home/usr/savedlogs
I'm hoping this is possible in a shell script. Thank you all for your help in advance, greatly appreciated!
Here is a simple script that might satisfy your requirement:
#!/bin/bash
# Get the number of Run* directories present
newnum=$(ls -ld /home/usr/savedlogs/Run* 2>/dev/null | wc -l)
mkdir -p /home/usr/savedlogs/Run${newnum}
cp /home/usr/logs/*.log /home/usr/savedlogs/Run${newnum}
This will start from Run0 and proceed from there.
If you do not care about incrementing directory names, you can do this with a simple timestamp:
DIR=$(date +%Y%m%d%H%M%S)
mkdir "/home/usr/savedlogs/$DIR"
cp /home/usr/logs/FileXXX.log "/home/usr/savedlogs/$DIR/"
This will work as long as the script runs no more than once per second.
You may try newDir=$(ls -l /home/usr/savedlogs | grep Run | wc -l) to get the number of existing Run directories.
So the whole script will look like this :
newDir=$(ls -l /home/usr/savedlogs | grep Run | wc -l)
mkdir -p /home/usr/savedlogs/Run${newDir}
cp /home/usr/logs/*.log /home/usr/savedlogs/Run${newDir}
First folder will be Run0, next Run1 and so on...
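Counting ls output is fragile: it miscounts if unrelated names contain "Run", and a gap in the numbering makes mkdir reuse an existing directory. A sketch that instead probes for the first free name (shown under a temporary directory; in the real script, base would be /home/usr/savedlogs):

```shell
#!/bin/bash
# demo base; the real script would use /home/usr/savedlogs here
base=$(mktemp -d)
mkdir "$base/Run0" "$base/Run1"   # simulate two earlier runs

# walk up from 0 until a Run<N> that does not exist yet is found
n=0
while [ -d "$base/Run$n" ]; do
    n=$((n + 1))
done

mkdir "$base/Run$n"
echo "created Run$n"
# → created Run2
```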

Why doesn't the cd command work when trying to pipe it with another command

I'm trying to use a pipeline with cd and ls like this:
ls -l | cd /home/user/someDir
but nothing happens.
And I tried:
cd /home/user/someDir | ls -l
It seems that the cd command does nothing while the ls -l will work on the current directory.
The directory that I'm trying to open is valid.
Why is that? Is it possible to pipe commands with cd / ls?
I haven't had any problems using pipes with other commands.
cd takes no input and produces no output; as such, it does not make sense to use it in pipes (which take output from the left command and pass it as input to the right command).
Are you looking for ls -l ; cd /somewhere?
Another option (if you need to list the target directory) is:
cd /somewhere && ls -l
The '&&' here will prevent executing the second command (ls -l) if the target directory does not exist.
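A further point worth knowing: each command in a pipeline runs in a subshell, so even when cd succeeds inside a pipe, the directory change dies with that subshell and the parent shell never sees it:

```shell
#!/bin/bash
cd /tmp
echo test | cd /    # this cd succeeds, but only inside a subshell
pwd                 # → /tmp  (the parent shell never moved)

# parentheses make the subshell explicit: work there, then fall back
(cd / && pwd)       # → /
pwd                 # → /tmp
```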

How to check if "s" permission bit is set on Linux shell? or Perl?

I am writing some scripts to check if the "s" permission bit is set for a particular file.
For example - permissions for my file are as follows-
drwxr-s---
How can I check in bash script or a perl script if that s bit is set or not?
If you're using perl, then have a look at the file test operators in perldoc (perldoc -f -X):
-u File has setuid bit set.
-g File has setgid bit set.
-k File has sticky bit set.
So something like:
if (-u $filename) { ... }
non-perl options
Using stat
#!/bin/bash
check_file="/tmp/foo.bar";
touch "$check_file";
chmod g+s "$check_file";
if stat -L -c "%A" "$check_file" | cut -c7 | grep -qE '^[sS]$'; then
echo "File $check_file has setgid."
fi
Explanation:
Use stat to print the file permissions.
We know the group-execute permission is character number 7, so we extract that with cut.
We use grep to check whether the result is s or S (lowercase s means setgid plus group execute; uppercase S means setgid without group execute), and if so we do whatever we want with that file that has setgid.
Using find
I have found (hah hah) that find is quite useful for the purpose of finding stuff based on permissions.
find . -perm -g=s -exec echo chmod g-s "{}" \;
Prints a chmod command for every file/directory that has setgid; remove the echo to actually unset the bit.
