list directory entries in the svn repository? - linux

How can I write a bash script to list directory entries in the svn repository?
I want to write a bash script because I have a large number of repositories.

If you are the subversion administrator, the following command will return the directories located in your repository.
svnlook tree $REPO_DIR --full-paths | egrep "/$"
The trick is the grep command, which looks for a trailing "/" character in the name.
The same trick works with the svn command as well:
svn list $REPO_URL -R | egrep "/$"
Extra notes
To run this command repeatedly, you can put it in a shell for loop:
for url in $URL1 $URL2 $URL3
do
svn list $url -R | egrep "/$"
done
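If the repositories all live under one parent directory on the server, a minimal sketch of the same loop with svnlook could look like this (the /var/svn/repos path is a placeholder for your layout):
# Iterate over every repository under a parent directory (placeholder path)
# and print only the directory entries, using the trailing-"/" trick above.
for repo in /var/svn/repos/*; do
    echo "== $repo =="
    svnlook tree "$repo" --full-paths | egrep "/$"
done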

Related

How can I delete the oldest n group of files with the same prefix?

In Linux I use InfluxDB which can make a backup of the database for archival purposes. Each backup comprises a series of files with the same prefix "/tank/Backups/var/Influxdb/20191225T235655Z." and different extensions.
I wanted to write a bash script which first deletes the oldest existing backups, then creates a new one (here I paste only the removal):
ls -tp /tank/Backups/var/Influxdb/* | grep -v '/$' | sed -E 's/\..+//' | \
sort -ru | sed 's/$/.*/' | tail -n +4 | xargs -d '\n' -r rm --
However, when I run the script as "sudo", I get
rm: cannot remove '/tank/Backups/var/Influxdb/20191225T235655Z.*': No such file or directory
When I run the quoted script without the last part, I get:
/tank/Backups/var/Influxdb/20190930T215357Z.*
/tank/Backups/var/Influxdb/20190930T215352Z.*
which is correct. Also, if I manually write
sudo rm /tank/Backups/var/Influxdb/20190930T215357Z.*
the command succeeds.
Why is the script reporting an error?
I'm using Ubuntu 18.04 and the folder "/tank" is a ZFS volume.
Better to do:
find /tank/Backups/var/Influxdb/* -mtime +5 -delete
to remove files older than 5 days.
Then you can run the command that creates the new backup.
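For completeness, a hedged sketch of how the two steps might fit together; the influxd invocation is an assumption about how the backups in the question are produced (InfluxDB 1.x portable backups), so adjust it to your setup:
#!/bin/sh
# Prune backup files older than 5 days, then take a fresh backup.
# The influxd invocation below is an assumption (InfluxDB 1.x, portable format).
BACKUP_DIR=/tank/Backups/var/Influxdb
find "$BACKUP_DIR" -maxdepth 1 -type f -mtime +5 -delete
influxd backup -portable "$BACKUP_DIR"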
Explaining the Error
This answer is only here to explain the error and give a deeper understanding of what is happening. If you are simply looking for an elegant solution, see the other answers.
When I run the quoted script without the last part, I get:
/tank/Backups/var/Influxdb/20190930T215357Z.*
/tank/Backups/var/Influxdb/20190930T215352Z.*
which is correct
The listed strings are not what you want. When you pass these paths to rm, it sees them as literal strings, that is, two files whose names end with a literal *. Since you don't have such files, you get an error.
When you type rm * manually into your console, bash (not rm!) does the globbing. bash searches for matching files and replaces the * with the list of found files. Only after that does bash execute rm foundFile1 foundFile2 .... rm never sees the *.
Strings inside a pipeline are not processed by bash, but by the commands in the pipeline, in your case rm. rm does not glob.
You could run bash inside your pipeline and let it expand the * you inserted earlier. To this end, replace the last command in your pipeline with xargs -r bash -c 'rm -- $*' --. However, note that your paths are not quoted here. If there are spaces or literal * characters in your filenames, the command will break. Leaving them unquoted is unavoidable here, as a quoted "*" is not expanded by bash.
To quote your files, you have to insert the * glob inside the bash command:
ls -tp /tank/Backups/var/Influxdb/* | grep -v '/$' | sed -E 's/\..+//' |
sort -ru | tail -n +4 | xargs -d\\n -L1 -r bash -c 'rm -- "$0."*'
The above command is only a simple fix for your pipeline. It is neither elegant nor very robust. Using tools like find instead is strongly recommended.
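In that spirit, here is a minimal sketch of a find-based version of the original task (keep the newest N backup groups, delete the rest). It assumes GNU find and that every backup file name starts with a timestamp prefix like 20191225T235655Z., as in the question; the keep count is arbitrary:
#!/bin/bash
backup_dir=/tank/Backups/var/Influxdb
keep=3

# List the unique timestamp prefixes, newest first (the ISO-like timestamps
# sort chronologically as plain strings).
prefixes=$(find "$backup_dir" -maxdepth 1 -type f -printf '%f\n' \
           | sed -E 's/\..*$//' | sort -ru)

# Delete every file belonging to a prefix beyond the newest $keep groups.
echo "$prefixes" | tail -n +$((keep + 1)) | while IFS= read -r p; do
    find "$backup_dir" -maxdepth 1 -type f -name "$p.*" -delete
done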

Script to open latest text file from a directory

I need a shell script to open the latest text file from a given directory. It will then be copied to another directory. How can I achieve it?
I need logic that will find and return the latest file in a directory (the name of the text file can be anything, it is not fixed, so I need to find the latest text file).
You can do something like this:
#!/bin/sh
SOURCE_DIR=/home/juned/Downloads
DEST_DIR=/tmp/
LAST_MODIFIED_FILE=`ls -t ${SOURCE_DIR}| head -1`
echo $LAST_MODIFIED_FILE
#Open file
vim $SOURCE_DIR/$LAST_MODIFIED_FILE
#Copy file
cp $SOURCE_DIR/$LAST_MODIFIED_FILE $DEST_DIR
echo "File copied successfully"
You can specify any application name in which you want to open that file like gedit, kate etc. Here I've used vim.
xdg-open - opens a file or URL in the user's preferred application
Not an expert in bash, but you can try this logic:
First, grab the latest file using ls -t (-t sorts by modification time) and head -1 (takes the first entry):
F=`ls -t * | head -1`
Then open the file using and editor:
xdg-open $F
gedit $F
...
As suggested by @AJefferiss, you can do it directly:
xdg-open $(ls -t * | head -1)
gedit $(ls -t * | head -1)
For editing the latest modified / created,
vim $(ls -t | head -1)
For editing the latest in alphanumerical order,
vim $(ls -1 | tail -1)
In one line (if you are sure that there are only files):
vim `ls -t .|head -1`
it will be opened in vim (or use another text editor)
if there are directories, you should write a script with a loop and test every file to check that it is not a directory (a sketch follows at the end of this answer):
if [ -f $FILE ];
or you can also use find, or use a pipe to get the latest file:
ls -lt .|sed -n 2p|grep -v '^d'
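A minimal sketch of that loop approach (it still relies on word splitting of the ls output, so filenames containing spaces will break it):
# Walk entries newest-first and open the first regular file found.
for f in $(ls -t); do
    if [ -f "$f" ]; then
        vim "$f"
        break
    fi
done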
The existing answers are helpful, but fall short when it comes to dealing with filenames with embedded spaces or other shell metacharacters.[1]
# Get the most recently modified *.txt file.
# (On *assignment*, names with spaces, ... are not a concern.)
f=$(ls -t *.txt | head -n 1)
# *Use* the variable enclosed in *double-quotes* to ensure that it is passed
# to the target command unmodified.
xdg-open "$f" # could also use "$(ls -t *.txt | head -n 1)" directly
Additionally, some answers use all-uppercase shell variable names, which should be avoided to prevent conflicts with environment variables.
[1] Due to use of ls, filenames with embedded newlines won't be handled correctly, but that's rarely a real-world concern.
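If that edge case does matter, here is a sketch of a newline-safe alternative that avoids ls, assuming GNU find and the NUL-terminated modes of sort, head, and cut (coreutils 8.25 or later):
# Pick the most recently modified *.txt file; NUL-terminated records
# survive any character a filename can contain.
find . -maxdepth 1 -type f -name '*.txt' -printf '%T@\t%p\0' \
  | sort -z -n -r | head -z -n 1 | cut -z -f2- | xargs -0 -r xdg-open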

Diff between two remote folders through SSH

Does anyone know how to list the files that exist in one remote folder but not in another remote folder? I have two servers (say Server 1 and Server 2) with a similar folder structure between which I'm doing rsync. However, the destination folder has more files than the source because some of the files were deleted. Now I'm trying to find out which files are new on Server 2 by diffing Server 1 against Server 2.
I can take the diff between two local folders directly using the following command:
diff /home/www/images/test_images /var/www/site/images/test_images
But I was wondering if it is possible to diff folders between two remote servers using ssh. Like this?
diff ubuntu1@images.server1.com:/home/www/images/test_images ubuntu2@images.server2.com:/var/www/site/images/test_images
Say the ssh configurations of Server 1 and Server 2 are as follows:
Server 1
IP: images.server1.com
User: ubuntu1
Password: pa$$word1
Images Path: /home/www/images/test_images
Server 2
IP: images.server2.com
User: ubuntu2
Password: pa$$word2
Images Path: /var/www/site/images/test_images
Hoping for any help to solve this problem. Thanks.
Try this command:
diff -B \
  <(sshpass -p 'pa$$word1' ssh ubuntu1@images.server1.com \
      "find /home/www/images/test_images -type f | sed 's/\/home\/www\/images\/test_images\///g'" | sort -n) \
  <(sshpass -p 'pa$$word2' ssh ubuntu2@images.server2.com \
      "find /var/www/site/images/test_images -type f | sed 's/\/var\/www\/site\/images\/test_images\///g'" | sort -n) \
  | grep ">" | awk '{print $2}'
Explanation:
You can use diff -B <() <() for taking the diff between two streams. The command first uses sshpass to ssh into the two servers without having to enter your passwords interactively.
Each parameter to diff -B uses the find command to recursively list all your images in the specified directory, uses sed to strip the root path from each file name (the roots differ between the two servers, and stripping them makes the lists comparable for diff), and uses sort to sort the result.
Since each line of the diff output starts with either > or <, grep is used to keep only the lines coming from your Server 2. Finally, awk prints only the second column (dropping the > from the output).
NOTE: You need to install sshpass first. Use apt-get to install it as follows:
sudo apt-get install sshpass
You can extend this by piping other commands like rm. Hope this works for you.
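If you can use SSH keys instead of passwords, a hedged sketch of the same idea without sshpass, using comm: both listings are made relative so they compare cleanly, and -13 keeps the lines that only appear in the Server 2 listing.
# Files present on Server 2 but not on Server 1 (assumes key-based SSH login).
comm -13 \
  <(ssh ubuntu1@images.server1.com "cd /home/www/images/test_images && find . -type f" | sort) \
  <(ssh ubuntu2@images.server2.com "cd /var/www/site/images/test_images && find . -type f" | sort)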

rsync to backup one file generated in dynamic folders

I'm trying to back up just one file that is generated by another application in dynamically named folders.
for example:
parent_folder/
back_01 -> file_blabla.zip (timestamp 2013.05.12)
back_02 -> file_blabla01.zip (timestamp 2013.05.14)
back_03 -> file_blabla02.zip (timestamp 2013.05.22)
I need to get the latest generated zip, just that one. It doesn't matter what the file is named, as long as it is the latest one, it is a zip, and it is inside "parent_folder".
Also, when I do the rsync, the folder structure plus the file name is recreated on the destination, and I want to omit that: I want to back up the file into one fixed folder under one fixed name, so I always know where the latest one is and it always overwrites the previous copy.
Right now I'm doing this with a Perl script that gets the latest generated folder with
"ls -tAF | grep '/$' | head -1"
and performs the rsync, but it brings over the last zip together with the folder structure that I don't want, so it doesn't overwrite my latest zip file.
rsync -rvtW --prune-empty-dirs --delay-updates --no-implied-dirs --modify-window=1 --include='*.zip' --exclude='*.*' --progress /source/ /myBackup/
It would also be great if I could do the rsync without needing Perl or any other script.
Thanks
The file names will differ each time?
That makes it hard for any type of syncing to work.
What you could do is create a new folder outside of where the file is found, then:
Before you start, remove the previously symlinked file in that folder.
When the file is found, i.e. with ls -tAF | grep '/$' | head -1 ....,
symlink it into this folder (a sketch follows below),
then rsync/scp/unison the file across to the new node.
If the symlink is named file-latest.zip then it will always be this
one file that is sent across.
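A minimal sketch of that symlink approach (the staging path, parent path and remote destination are placeholders; it assumes simple file names without spaces):
# Point a stable name at the newest zip, then push only that name.
staging=/path/to/staging
parent=/path/to/parent_folder
rm -f "$staging/file-latest.zip"
newest_dir=$(ls -tAF "$parent" | grep '/$' | head -1)
newest_zip=$(ls -t "$parent/$newest_dir"*.zip | head -1)
ln -s "$newest_zip" "$staging/file-latest.zip"
rsync -Lvt "$staging/file-latest.zip" remote_server:/myBackup/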
But why do all that when you can just use scp? You can also take a look here:
https://github.com/vahidhedayati/definedscp
for a more long-winded approach. It is not for this exact situation, but it uses the real file date/time stamp and converts it to seconds, which might be useful if you wish to do the stat in a different way.
Use stat to work out the latest file, then simply scp it across; here is something to get you started:
One liner:
scp $(find /path/to/parent_folder -name \*.zip -exec stat -t {} \;|awk '{print $1" "$13}'|sort -k2nr|head -n1|awk '{print $1}') remote_server:/path/to/name.zip
A more long-winded way, maybe of use to understand what the above is doing:
#!/bin/bash
# Collect "filename mtime" pairs for every zip under parent_folder.
FOUND_ARRAY=()
cd parent_folder
for file in $(find . -name \*.zip); do
    # Field 13 of `stat -t` is the modification time in epoch seconds.
    ptime=$(stat -t $file | awk '{print $13}')
    FOUND_ARRAY+=($file" "$ptime)
done
# Join the array with newlines, sort by mtime (newest first) and keep the name.
IFS=$'\n'
FOUND_FILE=$(echo "${FOUND_ARRAY[*]}" | sort -k2nr | head -n1 | awk '{print $1}')
scp $FOUND_FILE remote_host:/backup/new_name.zip
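For an rsync-only variant without Perl, a hedged sketch (GNU find assumed; paths and the remote host are placeholders): find the newest zip anywhere under parent_folder and push it under a fixed name, so each run overwrites the previous copy.
# Newest *.zip by modification time, copied to a fixed remote name.
latest=$(find /path/to/parent_folder -type f -name '*.zip' -printf '%T@ %p\n' \
         | sort -nr | head -n 1 | cut -d' ' -f2-)
rsync -vt "$latest" remote_server:/myBackup/latest.zip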

shell script to download latest file from FTP

I am writing a shell script for the first time, and I want to download the latest created file from an FTP server.
I want to download the latest file from a specific folder. Below is my code for that, but it downloads all the files in the folder, not just the latest one.
ftp -in ftp.abc.com << SCRIPTEND
user xyz xyz
binary
cd Rpts/
mget ls -t -r | tail -n 1
quit
SCRIPTEND
Can you help me with this, please?
Try using the wget or lftp utility instead; lftp compares file times/dates and, AFAIR, FTP scripting is what it is meant for. Switch to ssh/rsync if possible; you can read a bit about lftp versus rsync here:
https://serverfault.com/questions/24622/how-to-use-rsync-over-ftp
Probably the easiest way is to link the last version on the server side to "current", and always get the file pointed to. If you're not the admin of the server, you need to list all files with date/time, grab that information, parse it, and decide which one is newest; in the meantime the state on the server can change, and you end up with a more complicated solution than it's worth.
The point is that "ls" sorts output in some way, and time may not be the default. There are switches to sort it, e.g. based on modification time, but even when the server responds with OK to ls -t, you can't be sure it really supports sorting; it can just ignore all switches and always return the same list. That's why admins usually use a "current" link (ln -s). If there's no "current", to make sure you have the right file, you need to parse the listing anyway (ls -al).
http://www.catb.org/esr/writings/unix-koans/shell-tools.html
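Since lftp is recommended above, here is a hypothetical sketch of that route. The cls options are quoted from memory and are an assumption; check help cls in your lftp, and verify whether date sorting lists newest first or oldest first.
#!/bin/sh
# Hypothetical lftp sketch (cls flags are assumptions; verify with `help cls`).
HOST=ftp.abc.com
CREDS=xyz,xyz
# List Rpts/ one name per line, sorted by date, and keep the first entry.
newest=$(lftp -u "$CREDS" -e 'cd Rpts; cls -1 --sort=date; exit' "$HOST" | head -n 1)
# Fetch just that file; swap head for tail if your listing is oldest-first.
lftp -u "$CREDS" -e "cd Rpts; get \"$newest\"; exit" "$HOST"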
Looking at the code, the line
mget ls -t -r | tail -n 1
doesn't do what you think. It actually grabs all of the output of ls -t and then tail processes the output of mget. You could replace this line with
mget $(ls -t -r | tail -n 1)
but I am not sure if ftp will support such a call...
Try using an FTP client other than ftp. For example, curlftpfs, available at curlftpfs.sourceforge.net, is a good candidate as it allows you to mount an FTP server to a directory as if it were a local folder and then run different commands on the files there (including find, grep, etc.). Take a look at this article.
This way, since the output comes from a local command, you'd be more certain that ls -t returns a properly sorted list.
Btw, it's a bit less convoluted to use ls -t | head -1 than ls -t -r | tail -1. They produce the same result but why reverse and grab from the tail when you can just grab the head :)
If you use curlftpfs then your script would be something like this (assuming server ftp.abc.com and user xyz with password xyz).
mkdir /tmp/ftpsession
curlftpfs ftp://xyz:xyz@ftp.abc.com /tmp/ftpsession
cd /tmp/ftpsession/Rpts
cp -Rpf $(ls -t | head -1) /your/destination/folder/or/file
cd -
umount /tmp/ftpsession
My solution is this:
curl 'ftp://server.de/dir/'$(curl 'ftp://server.de/dir/' 2>/dev/null | tail -1 | awk '{print $(NF)}')

Resources