Getting individual file name from a list of file names separated by space in Linux shell - linux

I have file names which I get from a git diff command in my shell script, and its output looks like this:
bewsdfsdf.txt xcvxcv.txt # separated by space
My script:
python3 "unitest.py"
COMMIT_ID=$(git rev-parse --verify HEAD)
echo ${COMMIT_ID}
Files=$(git diff-tree --no-commit-id --name-only -r ${COMMIT_ID})
touch names.txt
echo ${Files} > names.txt
cat names.txt
I want to get each file name separately. How can I do it?

Redirect the output of the command git diff-tree --no-commit-id --name-only -r ${COMMIT_ID} to the xargs utility.
Using the -I option you can take each file name separately and execute a command for each one (here the replace-str is set to %).
For example, if you want to echo each file name:
git diff-tree --no-commit-id --name-only -r ${COMMIT_ID} | xargs -I % echo %
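If a file name might itself contain spaces, a hedged alternative (a sketch assuming a bash shell and a git version whose diff-tree supports -z) is to NUL-terminate the names and read them one at a time:
git diff-tree --no-commit-id --name-only -r -z ${COMMIT_ID} | while IFS= read -r -d '' file
do
    # replace echo with whatever you need to do per file
    echo "Changed file: $file"
done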

Related

lftp delete multiples files with Bash

I am trying to create a script that deletes all the old files except the three most recent ones in my backup directory, using lftp.
I have tried to do this with ls -1tr, which returns all the files in ascending date order, followed by head -$NB_BACKUP_TO_RM ($NB_BACKUP_TO_RM is the number of files I want to delete from my list); these two commands return the correct files.
After this I want to remove all of them, so I add xargs rm --, but Bash reports that the files don't exist... I think this command is not running in the remote directory but in the local one, and I don't know what I can do to delete these files (from my returned list).
Here is the full code:
MAX_BACKUP=3
NB_BACKUP=$(lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp* | wc -l ; quit" -u $USER,$PASSWORD $HOST)
if (( $NB_BACKUP > $MAX_BACKUP ))
then
NB_BACKUP_TO_RM=$(($NB_BACKUP-$MAX_BACKUP))
REMOVE=$(lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs rm -- ; quit" -u $USER,$PASSWORD $HOST)
echo $REMOVE
fi
Do you have an idea of the problem? How can I delete the files from my list (after ls -1tr $REMOTE_DIR/full_backup_ftp* and head -$NB_BACKUP_TO_RM)?
Thanks for your help
Starting an SFTP connection can be time-consuming. Below is a slightly modified solution that avoids multiple lftp sessions. It will perform much better than the alternative solution, especially if a large number of files have to be purged.
Basically, it leverages lftp's flexibility to mix lftp commands with external commands. It creates a command file with a series of 'rm' commands (leveraging head, xargs, ...), and executes those commands INSIDE the same lftp session.
Also note that lftp's 'ls' does not allow wildcards; use 'cls' instead.
Make sure you test this carefully, because of the potential removal of important files.
lftp -u $USER,$PASSWORD $HOST <<__CMD__
cls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs -I{} echo rm {} > rm_list.txt
source rm_list.txt
__CMD__
Or, as a one-liner, use lftp's ability to execute a dynamically generated command (source -e). It eliminates the temporary file.
lftp -u $USER,$PASSWORD $HOST <<__CMD__
source -e 'cls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs -I{} echo rm {}'
__CMD__
According to man lftp, xargs looks like an unknown command inside lftp, and a plain xargs rm deletes local files, not remote files.
So please use xargs as below; it works for me.
lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp*; quit" -u $USER,$PASSWORD $HOST | head -$NB_BACKUP_TO_RM | xargs -I {} lftp -e 'rm '{}'; quit' -u $USER,$PASSWORD $HOST

Commands work on terminal but not in shell script

The following commands work on my terminal but not in my shell script. I later found out that my terminal was /bin/tcsh. Can somebody tell me what changes I need to make for /bin/sh? Here are the commands I need to change:
cp source_dir/*/dir1/*.xml destination_dir/
Error in sh-> cp: cannot stat `source_dir/*/dir1/*.xml': No such file or directory
sed -i "s+${initial_name}+${final_name}+" $file_name
This one does not complain but does not work either.
I am adding an example for testing. The code is meant to rename the XML files and also the matching name inside them. For example:
The file name crr.ya.na.aa.xml should be changed to aa.xml
The same name inside crr.ya.na.aa.xml should also be changed from crr.ya.na.aa to aa
Here is the code:
#!/bin/sh
# Create dir structure for testing
rm -rf audience
mkdir audience
mkdir audience/dir1 audience/dir2 audience/dir3
mkdir audience/dir1/ipxact audience/dir2/ipxact audience/dir3/ipxact
touch audience/dir1/ipxact/crr.ya.na.aa.xml
echo "<spirit:name>crr.ya.na.aa</spirit:name>" > audience/dir1/ipxact/crr.ya.na.aa.xml
touch audience/dir2/ipxact/crr.ya.na.bb.xml
echo "<spirit:name>crr.ya.na.bb</spirit:name>" > audience/dir2/ipxact/crr.ya.na.bb.xml
touch audience/dir3/ipxact/crr.ya.na.cc.xml
echo "<spirit:name>crr.ya.na.cc</spirit:name>" > audience/dir3/ipxact/crr.ya.na.cc.xml
# Create a dir for ipxact_drop files if it does not exist
mkdir -p ipxact_drop
rm -rf ipxact_drop/*
cp audience/*/ipxact/*.xml ipxact_drop/
ls ipxact_drop/ > ipxact_drop_files.log
cat ipxact_drop_files.log | \
awk '{ split($0,a,"."); print a[length(a)-1] "." a[length(a)] }' ipxact_drop_files.log > file_names.log
cat ipxact_drop_files.log | \
awk '{ split($0,a,"."); print "mv ipxact_drop/" $0 " ipxact_drop/" a[length(a)-1] "." a[length(a)] }' ipxact_drop_files.log > command.log
chmod +x command.log
./command.log
while read line
do
echo ipxact_drop/$line
initial_name=`grep -m 1 crr ipxact_drop/$line | sed -e 's/<spirit:name>//' | sed -e 's/<\/spirit:name>//' `
final_name="${line%.*}"
echo $initial_name
echo $final_name
sed -i "s+${initial_name}+${final_name}+" ipxact_drop/$line
done < file_names.log
echo " ***** SCRIPT RUN FINISHED *****"
Only the sed command at the end is not working
I was reading some other posts and understood that XML files can have problems with scripts. Here is what has worked for me up to now.
To remove cp error: replace #!/bin/sh -f with #!/bin/sh
To remove sed error for the test input: replace sed -i ...... with sed -i.back ....
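For reference, a minimal sketch of what the corrected loop body could look like with those two changes applied (same variables as in the script above; assuming GNU grep/sed as used there):
initial_name=`grep -m 1 crr "ipxact_drop/$line" | sed -e 's/<spirit:name>//' -e 's/<\/spirit:name>//'`
final_name="${line%.*}"
sed -i.back "s+${initial_name}+${final_name}+" "ipxact_drop/$line"
# -i.back leaves a backup copy next to the original; remove it if you do not need it
rm -f "ipxact_drop/$line.back"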

ssh tail with nested ls and head cannot access

I am trying to execute the following command:
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
ls: cannot access /var/log/alert_ARCDB.log: No such file or directory
tail: cannot follow `-' by name
Notice the error returned. When I log in over ssh separately and then execute
tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )
I see the below:
# ls -t /var/log/alert_ARCDB.log | head -n1
/var/log/alert_ARCDB.log
Why is that happening and how can I fix it? I am trying to do this in one line as I don't want to create a script file.
Thanks a lot
Shell parameter expansion happens before command execution.
Here's a simple example. If I type...
ls "$HOME"
...the shell replaces $HOME with the path to my home directory first, then runs something like ls /home/larsks. The ls command has no idea that the command line originally had $HOME.
If we look at your command...
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
...we see that you're in exactly the same situation. The $(ls -t ...) expression is expanded before ssh is executed. In other words, that command is running on your local system.
You can inhibit the shell expansion on your local system by using single quotes. For example, running:
echo '$HOME'
Will produce:
$HOME
So you can run:
ssh root@10.10.10.50 'tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )'
But there's another problem here. If /var/log/alert_ARCDB.log is a file, your command makes no sense: calling ls -t on a single file gets you nothing.
If alert_ARCDB.log is a directory, you have a different problem. The result of ls /some/directory is a list of filenames without any directory prefix. If I run something like:
ls -t /tmp
I will get output like
file1
file2
If I do this:
tail $(ls -t /tmp | head -1)
I end up with a command that looks like:
tail file1
And that will fail, because there is no file1 in my current directory.
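A quick workaround (a sketch, not part of the original answer) is to glob with the full path and -d, so that ls prints complete names instead of bare filenames:
tail -F -n 1 "$(ls -dt /tmp/* | head -n 1)"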
One approach would be to pipe the commands you want to perform to ssh. One simple way to achieve that is to first create a function that will echo the commands you want executed:
remote_commands()
{
echo 'cd /var/log/alert_ARCDB.log'
echo 'tail -F -n 1 "$(ls -t | head -n1 )"'
}
The cd will allow you to use the relative path listed by ls. The single quotes make sure that everything will be sent as-is to the remote shell, with no local expansion occurring.
Then you can do
ssh root@10.10.10.50 bash < <(remote_commands)
This assumes alert_ARCDB.log is a directory (or else I am not sure why you would want to add head -n1 after that).
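If, on the other hand, alert_ARCDB.log is a single log file, the ls/head indirection can simply be dropped (a sketch using the path from the question):
ssh root@10.10.10.50 'tail -F -n 1 /var/log/alert_ARCDB.log'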

Can't add a file separated with space to git

I have been writing a script to add untracked files using git add .
The loop I use in my script is
for FILE in $(git ls-files -o --exclude-standard); do
git add $FILE
git commit -m "Added $FILE"
git push origin master
done
The script runs fine until it faces a filename which has a space in it. For example, I can't add the file Hello 22.mp4 (note that there is a SPACE between Hello and 22). The above loop treats it as two separate files, Hello and 22.mp4, and exits with an error.
Does someone know how to add it as a single file?
Thanks
What's happening is that the shell expands the $(...) into a bunch of words, and it interprets a file with embedded spaces as multiple files. Even with the prior suggestions of quoting the git add command, it wouldn't work. So the loop is run with the wrong arguments, as shown by this output with set -x:
ubuntu@up:~/test$ ls -1
a a
ubuntu@up:~/test$ set -x; for FILE in $(git ls-files -o --exclude-standard); do git add "$FILE"; git commit -m "Added $FILE"; done
+ set -x
++ git ls-files -o --exclude-standard
+ for FILE in '$(git ls-files -o --exclude-standard)'
+ git add a
...
The proper solution is to quote git add "$file" and have git ls-files NUL-separate the filenames by passing -z, then use a while read loop with a null delimiter:
git ls-files -o --exclude-standard -z | while read -r -d '' file; do
git add "$file"
git commit -m "Added $file"
git push origin master
done
If you are using bash, as an alternative to the solution provided by @AndrewF you can make use of the IFS internal variable to change the delimiter from space to newline, something along these lines:
(IFS=$'\n'
for FILE in $(git ls-files -o --exclude-standard); do
git add $FILE
git commit -m "Added $FILE"
git push origin master
done
)
This is just for your information. The response of AndrewF is more informative, covering the debugging option and the usage of while instead of for.
Hope this helps!
Try putting the $FILE var in quotes:
git add "$FILE"
That'll quote the filename, thus allowing spaces in it.
Replace git add $FILE with git add "$FILE". That way it will be interpreted as a single element.
I know that this is very late, but here is one way to do it using the standard Linux xargs command:
git ls-files -o --exclude-standard | xargs -L 1 -I{} -d '\n' git add '{}'
You can test it by simply echoing the command as follows:
git ls-files -o --exclude-standard | xargs -L 1 -I{} -d '\n' echo "git add '{}'"
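If any filenames could contain newlines as well, a hedged variant (assuming GNU xargs, whose -0 flag reads NUL-separated input) is to pair git ls-files -z with xargs -0:
git ls-files -o --exclude-standard -z | xargs -0 -I{} git add '{}'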
To add it as a single file, add a backslash before the space in the filename:
git add pathtofilename/filenamewith\ space.txt

Retaining file permissions with Git

I want to version control my web server as described in Version control for my web server, by creating a git repo out of my /var/www directory. My hope was that I would then be able to push web content from our dev server to github, pull it to our production server, and spend the rest of the day at the pool.
Apparently a kink in my plan is that Git won't respect file permissions (I haven't tried it, only reading about it now.) I guess this makes sense in that different boxes are liable to have different user/group setups. But if I wanted to force permissions to propagate, knowing my servers are configured the same, do I have any options? Or is there an easier way to approach what I'm trying to do?
Git is a version control system created for software development, so from the whole set of modes and permissions it stores only the executable bit (for ordinary files) and the symlink bit. If you want to store full permissions, you need a third-party tool, like git-cache-meta (mentioned by VonC), or Metastore (used by etckeeper). Or you can use IsiSetup, which IIRC uses git as its backend.
See Interfaces, frontends, and tools page on Git Wiki.
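You can see the one bit Git does track by inspecting the index, and flip it explicitly when needed (a small illustration; script.sh is just a placeholder for any tracked file):
git ls-files --stage script.sh        # mode 100755 means executable, 100644 a plain file
git update-index --chmod=+x script.sh # set the executable bit in the index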
The git-cache-meta mentioned in SO question "git - how to recover the file permissions git thinks the file should be?" (and the git FAQ) is the more straightforward approach.
The idea is to store in a .git_cache_meta file the permissions of the files and directories.
It is a separate file not versioned directly in the Git repo.
That is why the usage for it is:
$ git bundle create mybundle.bdl master; git-cache-meta --store
$ scp mybundle.bdl .git_cache_meta machine2:
#then on machine2:
$ git init; git pull mybundle.bdl master; git-cache-meta --apply
So you:
bundle your repo and save the associated file permissions.
copy those two files to the remote server
restore the repo there, and apply the permissions
This is quite late but might help some others. I do what you want to do by adding two git hooks to my repository.
.git/hooks/pre-commit:
#!/bin/bash
#
# A hook script called by "git commit" with no arguments. The hook should
# exit with non-zero status after issuing an appropriate message if it wants
# to stop the commit.
SELF_DIR=`git rev-parse --show-toplevel`
DATABASE=$SELF_DIR/.permissions
# Clear the permissions database file
> $DATABASE
echo -n "Backing-up permissions..."
IFS_OLD=$IFS; IFS=$'\n'
for FILE in `git ls-files --full-name`
do
# Save the permissions of all the files in the index
echo $FILE";"`stat -c "%a;%U;%G" $FILE` >> $DATABASE
done
for DIRECTORY in `git ls-files --full-name | xargs -n 1 dirname | uniq`
do
# Save the permissions of all the directories in the index
echo $DIRECTORY";"`stat -c "%a;%U;%G" $DIRECTORY` >> $DATABASE
done
IFS=$IFS_OLD
# Add the permissions database file to the index
git add $DATABASE -f
echo "OK"
.git/hooks/post-checkout:
#!/bin/bash
SELF_DIR=`git rev-parse --show-toplevel`
DATABASE=$SELF_DIR/.permissions
echo -n "Restoring permissions..."
IFS_OLD=$IFS; IFS=$'\n'
while read -r LINE || [[ -n "$LINE" ]];
do
ITEM=`echo $LINE | cut -d ";" -f 1`
PERMISSIONS=`echo $LINE | cut -d ";" -f 2`
USER=`echo $LINE | cut -d ";" -f 3`
GROUP=`echo $LINE | cut -d ";" -f 4`
# Set the file/directory permissions
chmod $PERMISSIONS $ITEM
# Set the file/directory owner and groups
chown $USER:$GROUP $ITEM
done < $DATABASE
IFS=$IFS_OLD
echo "OK"
exit 0
The first hook is called when you "commit" and will read the ownership and permissions for all the files in the repository and store them in a file in the root of the repository called .permissions and then add the .permissions file to the commit.
The second hook is called when you "checkout" and will go through the list of files in the .permissions file and restore the ownership and permissions of those files.
You might need to do the commit and checkout using sudo.
Make sure the pre-commit and post-checkout scripts have execution permission.
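For example, assuming the hooks live in the default .git/hooks directory:
chmod +x .git/hooks/pre-commit .git/hooks/post-checkout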
We can improve on the other answers by changing the format of the .permissions file to executable chmod statements, and by making use of find's -printf option. Here is the simpler .git/hooks/pre-commit file:
#!/usr/bin/env bash
echo -n "Backing-up file permissions... "
cd "$(git rev-parse --show-toplevel)"
find . -printf 'chmod %m "%p"\n' > .permissions
git add .permissions
echo done.
...and here is the simplified .git/hooks/post-checkout file:
#!/usr/bin/env bash
echo -n "Restoring file permissions... "
cd "$(git rev-parse --show-toplevel)"
. .permissions
echo "done."
Remember that other tools might have already configured these scripts, so you may need to merge them together. For example, here's a post-checkout script that also includes the git-lfs commands:
#!/usr/bin/env bash
echo -n "Restoring file permissions... "
cd "$(git rev-parse --show-toplevel)"
. .permissions
echo "done."
command -v git-lfs >/dev/null 2>&1 || { echo >&2 "\nThis repository is configured for Git LFS but 'git-lfs' was not found on your path. If you no longer wish to use Git LFS, remove this hook by deleting .git/hooks/post-checkout.\n"; exit 2; }
git lfs post-checkout "$@"
In case you are coming into this right now, I've just been through it today and can summarize where this stands. If you did not try this yet, some details here might help.
I think @Omid Ariyan's approach is the best way. Add the pre-commit and post-checkout scripts. DON'T forget to name them exactly the way Omid does and DON'T forget to make them executable. If you forget either of those, they have no effect and you run "git commit" over and over wondering why nothing happens :) Also, if you cut and paste out of the web browser, be careful that the quotation marks and ticks are not altered.
If you run the pre-commit script once (by running a git commit), then the file .permissions will be created. You can add it to the repository and I think it is unnecessary to add it over and over at the end of the pre-commit script. But it does not hurt, I think (hope).
There are a few little issues about the directory name and the existence of spaces in the file names in Omid's scripts. The spaces were a problem here and I had some trouble with the IFS fix. For the record, this pre-commit script did work correctly for me:
#!/bin/bash
SELF_DIR=`git rev-parse --show-toplevel`
DATABASE=$SELF_DIR/.permissions
# Clear the permissions database file
> $DATABASE
echo -n "Backing-up file permissions..."
IFSold=$IFS
IFS=$'\n'
for FILE in `git ls-files`
do
# Save the permissions of all the files in the index
echo $FILE";"`stat -c "%a;%U;%G" $FILE` >> $DATABASE
done
IFS=${IFSold}
# Add the permissions database file to the index
git add $DATABASE
echo "OK"
Now, what do we get out of this?
The .permissions file is in the top level of the git repo. It has one line per file, here is the top of my example:
$ cat .permissions
.gitignore;660;pauljohn;pauljohn
05.WhatToReport/05.WhatToReport.doc;664;pauljohn;pauljohn
05.WhatToReport/05.WhatToReport.pdf;664;pauljohn;pauljohn
As you can see, we have
filepath;perms;owner;group
In the comments about this approach, one of the posters complains that it only works with the same username, and that is technically true, but it is very easy to fix. Note the post-checkout script has two action pieces,
# Set the file permissions
chmod $PERMISSIONS $FILE
# Set the file owner and groups
chown $USER:$GROUP $FILE
So I am only keeping the first one; that's all I need. My user name on the web server is indeed different, but more importantly you can't run chown unless you are root. You can run chgrp, however. It is plain enough how to put that to use.
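A hedged sketch of that trimmed-down loop body (same variables and .permissions format as above; the chgrp line is optional and only works if you are a member of $GROUP):
# Set the file permissions only; chown is skipped because it requires root
chmod $PERMISSIONS $FILE
# Optionally fix the group, which does not require root
chgrp $GROUP $FILE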
In the first answer in this post, the one that is most widely accepted, the suggestion is to use git-cache-meta, a script that is doing the same work that the pre/post hook scripts here are doing (parsing output from git ls-files). These scripts are easier for me to understand; the git-cache-meta code is rather more elaborate. It is possible to keep git-cache-meta in the path and write pre-commit and post-checkout scripts that would use it.
Spaces in file names are a problem with both of Omid's scripts. In the post-checkout script, you'll know you have the spaces in file names if you see errors like this
$ git checkout -- upload.sh
Restoring file permissions...chmod: cannot access '04.StartingValuesInLISREL/Open': No such file or directory
chmod: cannot access 'Notebook.onetoc2': No such file or directory
chown: cannot access '04.StartingValuesInLISREL/Open': No such file or directory
chown: cannot access 'Notebook.onetoc2': No such file or directory
I'm checking on solutions for that. Here's something that seems to work, but I've only tested it in one case:
#!/bin/bash
SELF_DIR=`git rev-parse --show-toplevel`
DATABASE=$SELF_DIR/.permissions
echo -n "Restoring file permissions..."
IFSold=${IFS}
IFS=$
while read -r LINE || [[ -n "$LINE" ]];
do
FILE=`echo $LINE | cut -d ";" -f 1`
PERMISSIONS=`echo $LINE | cut -d ";" -f 2`
USER=`echo $LINE | cut -d ";" -f 3`
GROUP=`echo $LINE | cut -d ";" -f 4`
# Set the file permissions
chmod $PERMISSIONS $FILE
# Set the file owner and groups
chown $USER:$GROUP $FILE
done < $DATABASE
IFS=${IFSold}
echo "OK"
exit 0
Since the permissions information is one line at a time, I set IFS to $, so only line breaks are seen as new things.
I read that it is VERY IMPORTANT to set the IFS environment variable back the way it was! You can see why a shell session might go badly if you leave $ as the only separator.
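A way to sidestep both problems (a sketch, not the original author's code) is to scope the IFS change to the read command itself and let it split on the semicolons directly, so nothing global needs restoring and spaces in file names are untouched:
while IFS=';' read -r FILE PERMISSIONS USER GROUP
do
    chmod "$PERMISSIONS" "$FILE"
done < "$DATABASE"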
In pre-commit/post-checkout, an option would be to use the "mtree" (FreeBSD) or "fmtree" (Ubuntu) utility, which "compares a file hierarchy against a specification, creates a specification for a file hierarchy, or modifies a specification."
The default keyword set is flags, gid, link, mode, nlink, size, time, type, and uid. This can be fitted to the specific purpose with the -k switch.
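A hedged sketch of how that could look (option names taken from the FreeBSD mtree man page; check your system's version, and you will probably want to exclude .git from the walk):
# Create a specification of owner, group and mode for the working tree
mtree -c -p . -k uid,gid,mode > .mtree
# Later, compare the tree against the specification (-U would update the tree to match it)
mtree -f .mtree -p .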
I am running on FreeBSD 11.1 (the FreeBSD jail virtualization concept makes the operating system optimal for this). The current version of Git I am using is 2.15.1, and I prefer to run everything in shell scripts. With that in mind I modified the suggestions above as follows:
git push: .git/hooks/pre-commit
#! /bin/sh -
#
# A hook script called by "git commit" with no arguments. The hook should
# exit with non-zero status after issuing an appropriate message if it wants
# to stop the commit.
SELF_DIR=$(git rev-parse --show-toplevel);
DATABASE=$SELF_DIR/.permissions;
# Clear the permissions database file
> $DATABASE;
printf "Backing-up file permissions...\n";
OLDIFS=$IFS;
IFS=$'\n';
for FILE in $(git ls-files);
do
# Save the permissions of all the files in the index
printf "%s;%s\n" $FILE $(stat -f "%Lp;%u;%g" $FILE) >> $DATABASE;
done
IFS=$OLDIFS;
# Add the permissions database file to the index
git add $DATABASE;
printf "OK\n";
git pull: .git/hooks/post-merge
#! /bin/sh -
SELF_DIR=$(git rev-parse --show-toplevel);
DATABASE=$SELF_DIR/.permissions;
printf "Restoring file permissions...\n";
OLDIFS=$IFS;
IFS=$'\n';
while read -r LINE || [ -n "$LINE" ];
do
FILE=$(printf "%s" $LINE | cut -d ";" -f 1);
PERMISSIONS=$(printf "%s" $LINE | cut -d ";" -f 2);
USER=$(printf "%s" $LINE | cut -d ";" -f 3);
GROUP=$(printf "%s" $LINE | cut -d ";" -f 4);
# Set the file permissions
chmod $PERMISSIONS $FILE;
# Set the file owner and groups
chown $USER:$GROUP $FILE;
done < $DATABASE
IFS=$OLDIFS
printf "OK\n";
exit 0;
If for some reason you need to recreate the script the .permissions file output should have the following format:
.gitignore;644;0;0
For a .gitignore file with 644 permissions given to root:wheel
Notice I had to make a few changes to the stat options.
Enjoy,
One addition to @Omid Ariyan's answer is permissions on directories. Add this after the for loop's done in his pre-commit script.
for DIR in $(find ./ -mindepth 1 -type d -not -path "./.git" -not -path "./.git/*" | sed 's#^\./##')
do
# Save the permissions of all the directories in the index
echo $DIR";"`stat -c "%a;%U;%G" $DIR` >> $DATABASE
done
This will save directory permissions as well.
Another option is git-store-meta. As the author described in this superuser answer:
git-store-meta is a perl script which integrates the nice features of git-cache-meta, metastore, setgitperms, and mtimestore.
An improved version of the answer from https://stackoverflow.com/users/9932792/tammer-saleh:
It only updates the permissions on changed files.
It handles symlinks.
It ignores empty directories (git cannot handle them).
.git/hooks/pre-commit:
#!/usr/bin/env bash
echo -n "Backing-up file permissions... "
cd "$(git rev-parse --show-toplevel)"
find . -type d ! -empty -printf 'X="%p"; chmod %m "$X"; chown %U:%G "$X"\n' > .permissions
find . -type f -printf 'X="%p"; chmod %m "$X"; chown %U:%G "$X"\n' >> .permissions
find . -type l -printf 'chown -h %U:%G "%p"\n' >> .permissions
git add .permissions
echo done.
.git/hooks/post-merge:
#!/usr/bin/env bash
echo -n "Restoring file permissions... "
cd "$(git rev-parse --show-toplevel)"
git diff -U0 .permissions | grep '^\+' | grep -Ev '^\+\+\+' | cut -c 2- | /usr/bin/bash
echo "done."
