Reverse directory search on linux command line - linux

I have a custom linux prompt which displays various useful nuggets of information. As I use SVN in my daily job I thought it would be nice to further customize my prompt with information as to the current workspace URL. This was mostly prompted by a recent case where I had switched to a branch then forgot I had done so. Confusion abounded so, in a bid to avoid this happening again, I thought this seemed like a good idea.
This has already been achieved by others so I could just follow their examples, but I also like to work things out from basic principles. One thing that I observed about other people's solutions was that they tended to execute 'svn info' with no regard to context. Not a problem in and of itself, but I thought it might be nice to test for the presence of the ubiquitous '.svn' directory before invoking 'svn info'.
I arrived at this partial solution:
if [ -d './.svn' ] ; then svn info | sed -n -e '/^URL/ s/.*svn//p' ; fi;
In the presence of a '.svn' directory I invoke 'svn info' then use sed to spit out the portion of the URL in which I am interested.
The problem comes however from the fact that, since svn 1.7, '.svn' is not ubiquitous!
I had thought that I might replace the test for the directory with a call to 'find' to perform a reverse directory search to search up the directory tree ... except there doesn't appear to be such an ability.
Other than dropping the test for '.svn' entirely, can anybody suggest how I might test for the presence of said folder in the current location and all parent folders?
Many thanks.

First of all: don't use a working copy for multiple branches/trunk. There's simply no reason for it. It usually takes less than five minutes to check out a particular branch of a particular project. And, in this day and age of gigabyte and terabyte sized hard drives, there's just no reason to save the space. (My first hard drive was 40 megabytes. And I used to lord it over my coworkers who had mere 10 and 20 megabyte hard drives.)
What little time and disk space you save will be lost the first time you accidentally use the wrong branch because you forgot that you've switched.
The best way to check to see if you're in a Subversion working copy is to run svn info and see what the exit value is. If it's not zero, you're not in a Subversion working directory.
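As a rough sketch (assuming a POSIX shell and that svn is on the PATH), that check looks like:
if svn info > /dev/null 2>&1 ; then
    echo "inside a Subversion working copy"
else
    echo "not in a working copy"
fi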
If you really want to have the repo root (or something similar) in your prompt, I suggest a sequence like this in your prompt command:
PS1="\u#\h:\w (\$(svn info --xml 2> /dev/null | sed -n '/<relative-url>/s/<.*>\(.*\)<.*>/\1/p'))\n$ "

This function will walk up the tree from your current directory looking for a ".svn" directory:
is_svn () {
    local dir=$PWD
    while [[ $dir != "/" ]]; do
        [[ -d "$dir/.svn" ]] && return 0
        dir=$(dirname "$dir")
    done
    return 1
}
Then, your prompt can include something like
$( is_svn && svn info | ... )

Related

RH Linux Bash Script help. Need to move files with specific words in the file

I have a RedHat linux box and I had written a script in the past to move files from one location to another with a specific text in the body of the file.
I typically only write scripts once a year so every year I forget more and more... That being said,
Last year I wrote this script and used it and it worked.
For some reason, I can not get it to work today and I know it's a simple issue and I shouldn't even be asking for help but for some reason I'm just not looking at it correctly today.
Here is the script.
ls -1 /var/text.old | while read file
do
grep -q "to.move" $file && mv $file /var/text.old/TBD
done
I'm listing all the files inside the /var/text.old directory.
I'm reading each file
then I'm grep'ing for "to.move" and holding the results
then I'm moving the resulting found files to the folder /var/text.old/TBD
I am an admin and I have rights to the above files and folders.
I can see the data in each file
I can mv them manually
I have used pwd to grab the correct spelling of the directory.
If anyone can just help me to see what the heck I'm missing here that would really make my day.
Thanks in advance.
UPDATE:
The files I need to move do not have Whitespaces.
The Error I'm getting is as follows:
grep: 9829563.msg: No such file or directory
NOTE: the file "982953.msg" is one of the files I need to move.
Also note: I'm getting this error for every file in the directory that I'm listing.
You didn't post any error, but I'm gonna take a guess and say that you have a filename with a space or special shell character.
Let's say you have 3 files, and ls -1 gives us:
hello
world
hey there
Now, read splits its input on the value of the special $IFS variable, which is set to <space><tab><newline> by default.
So instead of looping over the 3 values you expect (hello, world, and hey there), you loop over 4 values (hello, world, hey, and there).
To fix this, we can do 2 things:
Set IFS to only a newline:
IFS="
"
ls -1 /var/text.old | while read file
...
In general, I like setting IFS to a newline at the start of the script, since I consider this to be slightly "safer", but opinions on this probably vary.
But it is much better not to parse the output of ls at all, and to use for instead:
for file in /var/text.old/*; do
This won't fork any external processes (piping ls into while starts two), and behaves less surprisingly in other ways. See here for some examples.
The second problem is that you're not quoting $file. You should always quote pathnames with double quotes, as in "$file", for the same reasons. If $file has a space (or a special shell character, such as *), the meaning of your command changes:
file=hey\ *
mv $file /var/text.old/TBD
Becomes:
mv hey * /var/text.old/TBD
Which is obviously very different from what you intended! What you intended was:
mv "hey *" /var/text.old/TBD

Find a string in Perforce file without syncing

Not sure if this is possible or not, but I figured I'd ask to see if anyone knows. Is it possible to find a file containing a string in a Perforce repository? Specifically, is it possible to do so without syncing the entire repository to a local directory first? (It's quite large - I don't think I'd have room even if I deleted lots of stuff - that's what the archive servers are for anyhow.)
There's any number of tools that can search through files in a local directory (I personally use Agent Ransack, but it's just one of many), but these will not search a remote Perforce directory, unless there's some (preferably free) tool I'm not aware of that has this capability, or maybe some hidden feature within Perforce itself?
p4 grep is your friend. From the Perforce blog:
'p4 grep' allows users to use simple file searches as well as regular expressions to search through file contents of head as well as earlier revisions of files stored on the server. While not every single option of a standard grep is supported, the most important options are available. Here is the syntax of the command according to 'p4 help grep':
p4 grep [ -a -i -n -v -A after -B before -C context -l -L -t -s -F -G ] -e pattern file[revRange]...
See also, the manual page.
Update: Note that there is a limitation on the number of files that Perforce will search in a single p4 grep command. Presumably this is to help keep the load on the server down. This manifests as an error:
Grep revision limit exceeded (over 10000).
If you have sufficient perforce permissions, you can use p4 configure to increase the dm.grep.maxrevs setting from this default of 10K to something larger. e.g. to set to 1 million:
p4 configure set dm.grep.maxrevs=1M
If you do not have permission to change this, you can work around it by splitting the p4 grep up into multiple commands over the subdirectories. You may need to split further into sub-subdirectories etc. depending on your depot structure.
For example, this command can be used at a bash shell to search each subdirectory of //depot/trunk one at a time. It makes use of the p4 dirs command to obtain the list of subdirectories from the server.
for dir in $(p4 dirs //depot/trunk/*); do
p4 grep -s -i -e the_search_string $dir/...
done
Actually, solved this one myself. p4 grep indeed does the trick. Doc here. You have to carefully narrow it down before it'll work properly - on our server at least you have to get it down to < 10000 files. I also had to redirect the output to a file instead of printing it out in the console, adding > output.txt, because there's a limit of 4096 chars per line in the console and the file paths are quite long.
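For example, a narrowed search with the output captured to a file might look like this (the depot path is a placeholder):
p4 grep -i -e the_search_string //depot/project/subdir/... > output.txt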
It's not something you can do with the standard Perforce tools. One helpful command might be p4 print, but I wouldn't think it's really faster than syncing.
This is a big "if", but if you have access to the server you can run Agent Ransack on the Perforce directory. Perforce stores all versioned files on disk; it's only the metadata that's in a database.

How can I loop through some files in my script?

I am very much a beginner at this and have searched for answers to my question but have not found any that I understand how to implement. Any help would be greatly appreciated.
I have a script:
FILE$=`ls ~/Desktop/File_Converted/`
mkdir /tmp/$FILE
mv ~/Desktop/File_Converted/* /tmp/$FILE/
So I can use AppleScript to say: when a file is dropped into this desktop folder, create a temp directory, move the file there and then do other stuff. I then delete the temp directory. This is fine as far as it goes, but the problem is that if another file is dropped into the File_Converted directory before I am done doing stuff to the file I am currently working with, it will change the value of the $FILE variable before the script has completed operating on the current file.
What I'd like to do is use a variable set up where the variable is, say, $FILE1. I check to see if $FILE1 is defined and, if not, use it. If it is defined, then try $FILE2, etc... In the end, when I am done, I want to reclaim the variable so $FILE1 gets set back to null again and the next file dropped into the File_Converted folder can use it again.
Any help would be greatly appreciated. I'm new to this so I don't know where to begin.
Thanks!
Dan
Your question is a little difficult to parse, but I think you're not really understanding shell globs or looping constructs. The globs are expanded based on what's there now, not what might be there earlier or later.
DIR=$(mktemp -d)
mv ~/Desktop/File_Converted/* "$DIR"
cd "$DIR"
for file in *; do
: # whatever you want to do to "$file"
done
You don't need a LIFO -- multiple copies of the script run for different events won't conflict over their variable names. What they will conflict on is shared temporary directories, and you should use mktemp -d to create a temporary directory with a new, unique, and guaranteed-nonconflicting name every time your script is run.
tempdir=$(mktemp -t -d mytemp.XXXXXX)
mv ~/Desktop/File_Converted/* "$tempdir"
cd "$tempdir"
for f in *; do
...whatever...
done
What you describe is a classic race condition, in which it is not clear that one operation will finish before a conflicting operation starts. These are not easy to handle, but you will learn so much about scripting and programming by handling them that it is well worth the effort to do so, even just for learning's sake.
I would recommend that you start by reviewing the lockfile or flock manpage. Try some experiments. It looks as though you probably have the right aptitude for this, for you are asking exactly the right questions.
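As a starting point for those experiments, here is a minimal flock sketch (the lock-file path and the guarded work are placeholders, not part of the question):
(
    flock -w 10 9 || exit 1    # wait up to 10 seconds for an exclusive lock on fd 9
    # ... move and process the dropped files here, safe from a second copy of the script ...
) 9> /tmp/file_converted.lock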
By the way, I suspect that you want to kill the $ in
FILE$=`ls ~/Desktop/File_Converted/`
Incidentally, @CharlesDuffy correctly observes that "using ls in scripts is indicative of something being done wrong in and of itself. See mywiki.wooledge.org/ParsingLs and mywiki.wooledge.org/BashPitfalls." One suspects that the suggested lockfile exercise will clear up both points, though it will probably take you several hours to work through it.

Need to monitor directory change, and perform action

First of all: I'm not a programmer, nor a Linux guru; I just have to work with Linux, Oracle, and shell scripts.
My current task is to monitor a table in Oracle (tool: sqlplus), and if it contains a certain row, then watch a Linux directory for a growing tmp file and log its attributes (e.g. ls -l) every 5 seconds.
The most important part is: this tmp file will be deleted if the above record is deleted from the Oracle table, and I need the last contents of this tmp file.
I can't control the Oracle data, just got query rights.
The available tools are: bash, awk, sed, some old version of perl, ruby (not 1.9*), and python (2.5). I don't have install rights, so most of the outside libraries are not accessible. I know I can run some libraries from my $HOME, but I don't have an internet connection on that machine, so I can't download any libraries.
Inotify is not available (older kernel).
Any idea where to start/how to do it? Thanks in advance.
How about creating a hard link in another directory? Then, when the file "disappears" from the original location, the hard link will still have access to the content.
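For example (paths are placeholders; both names must live on the same filesystem for a hard link to work):
# The data stays reachable through the link even after the original name is deleted
ln /some/dir/growing_file.tmp /some/dir/growing_file.keep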
This is ugly and naive... but...
#!/bin/bash
WASTHERE=0
MONITORING=/tmp/whatever.dat
LASTBACKUP=/tmp/mybackup.dat
LOGFILE=/tmp/mylog.log
# Just create an empty file to start with
touch "$LASTBACKUP"
while [ 1 ]; do
    if [[ ! -e "$MONITORING" ]]; then
        if [[ $WASTHERE -ne 0 ]]; then
            echo "File is gone! Do something with $LASTBACKUP";
            WASTHERE=0
        fi
    else
        WASTHERE=1
        ls -l "$MONITORING" >> $LOGFILE
        cp "$MONITORING" "$LASTBACKUP"
    fi
    sleep 5
done
The unfortunate part about this is that if anything happens to the file being 'monitored' while the script is sleeping (content is written to it, for example) and the file is then deleted before the script wakes up, the newly written content will not be in the 'backup.'

Keep Remote Directory Up-to-date

I absolutely love the Keep Remote Directory Up-to-date feature in Winscp. Unfortunately, I can't find anything as simple to use in OS X or Linux. I know the same thing can theoretically be accomplished using changedfiles or rsync, but I've always found the tutorials for both tools to be lacking and/or contradictory.
I basically just need a tool that works in OSX or Linux and keeps a remote directory in sync (mirrored) with a local directory while I make changes to the local directory.
Update
Looking through the solutions, I see a couple which solve the general problem of keeping a remote directory in sync with a local directory manually. I know that I can set a cron task to run rsync every minute, and this should be fairly close to real time.
This is not the exact solution I was looking for as winscp does this and more: it detects file changes in a directory (while I work on them) and then automatically pushes the changes to the remote server. I know this is not the best solution (no code repository), but it allows me to very quickly test code on a server while I develop it. Does anyone know how to combine rsync with any other commands to get this functionality?
lsyncd seems to be the perfect solution. It combines inotify (a kernel built-in facility which watches for file changes in directory trees) and rsync (a cross-platform file-syncing tool).
lsyncd -rsyncssh /home remotehost.org backup-home/
Quote from github:
Lsyncd watches a local directory tree's event monitor interface (inotify or fsevents). It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes. By default this is rsync. Lsyncd is thus a light-weight live mirror solution that is comparatively easy to install, not requiring new filesystems or block devices, and does not hamper local filesystem performance.
How "real-time" do you want the syncing? I would still lean toward rsync since you know it is going to be fully supported on both platforms (Windows, too, with cygwin) and you can run it via a cron job. I have a super-simple bash file that I run on my system (this does not remove old files):
#!/bin/sh
rsync -avrz --progress --exclude-from .rsync_exclude_remote . remote_login@remote_computer:remote_dir
# options
# -a archive
# -v verbose
# -r recursive
# -z compress
Your best bet is to set it up and try it out. The -n (--dry-run) option is your friend!
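For instance, a dry run of the same command (same placeholder host as above) just adds -n:
# -n / --dry-run reports what would be transferred without changing anything
rsync -avrzn --progress --exclude-from .rsync_exclude_remote . remote_login@remote_computer:remote_dir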
Keep in mind that rsync (at least in cygwin) does not support unicode file names (as of 16 Aug 2008).
What you want to do for Linux remote access is use 'sshfs' - the SSH File System.
# sshfs username@host:path/to/directory local_dir
Then treat it like a network mount, which it is...
There's a bit more detail, like how to set it up so you can do this as a regular user, on my blog.
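A concrete sketch of that workflow (host and paths are placeholders; the unmount command differs by platform):
mkdir -p ~/mnt/remote_project
sshfs username@host.com:/var/www/project ~/mnt/remote_project
# ... edit files under ~/mnt/remote_project as if they were local ...
fusermount -u ~/mnt/remote_project    # Linux; on OS X use: umount ~/mnt/remote_project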
If you want the asynchronous behavior of WinSCP, you'll want to use rsync combined with something that executes it periodically. The cron solution above works, but may be overkill for the WinSCP use case.
The following command will execute rsync every 5 seconds to push content to the remote host. You can adjust the sleep time as needed to reduce server load.
# while true; do rsync -avrz localdir user@host:path; sleep 5; done
If you have a very large directory structure and need to reduce the overhead of the polling, you can use 'find':
# touch -d 01/01/1970 last; while true; do if [ "`find localdir -newer last -print -quit`" ]; then touch last; rsync -avrz localdir user@host:path; else echo -ne .; fi; sleep 5; done
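The same loop spread over several lines, purely for readability (localdir and user@host:path remain placeholders):
touch -d 01/01/1970 last
while true; do
    if [ "$(find localdir -newer last -print -quit)" ]; then
        touch last
        rsync -avrz localdir user@host:path
    else
        echo -ne .
    fi
    sleep 5
done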
And I said cron may be overkill? But at least this is all just done from the command line, and can be stopped via a ctrl-C.
kb
To detect changed files, you could try fam (file alteration monitor) or inotify. The latter is linux-specific, fam has a bsd port which might work on OS X. Both have userspace tools that could be used in a script together with rsync.
I have the same issue. I loved winscp "keep remote directory up to date" command. However, in my quest to rid myself of Windows, I lost winscp. I did write a script that uses fileschanged and rsync to do something similar much closer to real time.
How to use:
Make sure you have fileschanged installed
Save this script in /usr/local/bin/livesync or somewhere reachable in your $PATH and make it executable
Use Nautilus to connect to the remote host (sftp or ftp)
Run this script by doing livesync SOURCE DEST
The DEST directory will be in /home/[username]/.gvfs/[path to ftp scp or whatever]
A couple of downsides:
It is slower than WinSCP (my guess is because it goes through Nautilus and has to detect changes through rsync as well)
You have to manually create the destination directory if it doesn't already exist. So if you're adding a directory, it won't detect and create the directory on the DEST side.
Probably more that I haven't noticed yet
Also, do not attempt to synchronize a SRC directory named "rsyncThis". That will probably not be good :)
#!/bin/sh
upload_files()
{
    if [ "$HOMEDIR" = "." ]
    then
        HOMEDIR=`pwd`
    fi
    while read input
    do
        SYNCFILE=${input#$HOMEDIR}
        echo -n "Sync File: $SYNCFILE..."
        rsync -Cvz --temp-dir="$REMOTEDIR" "$HOMEDIR/$SYNCFILE" "$REMOTEDIR/$SYNCFILE" > /dev/null
        echo "Done."
    done
}
help()
{
    echo "Live rsync copy from one directory to another. This will overwrite the existing files on DEST."
    echo "Usage: $0 SOURCE DEST"
}
case "$1" in
    rsyncThis)
        HOMEDIR=$2
        REMOTEDIR=$3
        echo "HOMEDIR=$HOMEDIR"
        echo "REMOTEDIR=$REMOTEDIR"
        upload_files
        ;;
    help)
        help
        ;;
    *)
        if [ -n "$1" ] && [ -n "$2" ]
        then
            fileschanged -r "$1" | "$0" rsyncThis "$1" "$2"
        else
            help
        fi
        ;;
esac
You could always use version control, like SVN, so all you have to do is have the server run svn up on a folder every night. This runs into security issues if you are sharing your files publicly, but it works.
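A nightly crontab entry on the server for that could look something like this (the time and working-copy path are placeholders):
# m h dom mon dow  command -- run svn up at 03:00 every night
0 3 * * * svn up /var/www/mysite > /dev/null 2>&1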
If you are using Linux though, learn to use rsync. It's really not that difficult as you can test every command with -n. Go through the man page, the basic format you will want is
rsync [OPTION...] SRC... [USER@]HOST:DEST
the command I run from my school server to my home backup machine is this
rsync -avi --delete ~ me@homeserv:~/School/ >> BackupLog.txt
This takes all of the files in my home directory (~) and uses rsync's archive mode (-a), verbosely (-v), lists all of the changes made (-i), deletes any files that don't exist anymore (--delete), and puts them in the folder /home/me/School/ on my remote server. All of the information it prints out (what was copied, what was deleted, etc.) is also appended to the file BackupLog.txt.
I know that's a whirlwind tour of rsync, but I hope it helps.
The rsync solutions are really good, especially if you're only pushing changes one way. Another great tool is unison -- it attempts to synchronize changes in both directions. Read more at the Unison homepage.
Great question, I searched for an answer for hours!
I have tested lsyncd and the problem is that the default delay is far too long and no example command line gives the -delay option.
The other problem is that by default rsync asks for a password each time!
Solution with lsyncd:
lsyncd --nodaemon -rsyncssh local_dir remote_user@remote_host remote_dir -delay .2
Another way is to use inotifywait in a script:
while inotifywait -r -e modify,create,delete local_dir ; do
    # if you need to, you can add a wait here
    rsync -avz local_dir remote_user@remote_host:remote_dir
done
For this second solution you will have to install the inotify-tools package.
To avoid having to enter a password on each change, simply use ssh-keygen:
https://superuser.com/a/555800/510714
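In short, something along these lines (the host is a placeholder):
ssh-keygen -t rsa                      # generate a key pair, accept the defaults
ssh-copy-id remote_user@remote_host    # install the public key on the server
ssh remote_user@remote_host            # should now log in without a password prompt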
It seems like perhaps you're solving the wrong problem. If you're trying to edit files on a remote computer then you might try using something like the ftp plugin for jedit. http://plugins.jedit.org/plugins/?FTP This ensures that you have only one version of the file so it can't ever be out of sync.
Building off of icco's suggestion of SVN, I'd actually suggest that if you are using subversion or similar for source control (and if you aren't, you should probably start) you can keep the production environment up to date by putting the command to update the repository into the post-commit hook.
There are a lot of variables in how you'd want to do that, but what I've seen work is have the development or live site be a working copy and then have the post-commit use an ssh key with a forced command to log into the remote site and trigger an svn up on the working copy. Alternatively in the post-commit hook you could trigger an svn export on the remote machine, or a local (to the svn repository) svn export and then an rsync to the remote machine.
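A rough sketch of the first variant (hostnames, paths and the key are hypothetical, not something from this answer):
#!/bin/sh
# hooks/post-commit on the Subversion server
REPOS="$1"
REV="$2"
# The key is restricted on the remote side with a forced command that runs
# "svn up" on the working copy, so simply logging in triggers the update.
ssh -i /home/svn/.ssh/deploy_key deploy@staging.example.com true > /dev/null 2>&1 &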
I would be worried about things that detect changes and push them, and I'd even be worried about things that ran every minute, just because of race conditions. How do you know it's not going to transfer the file at the very same instant it's being written to? Stumble across that once or twice and you'll lose all of the time-saving advantage you had by constantly rsyncing or similar.
Will DropBox (http://www.getdropbox.com/) do what you want?
Use watcher.py and rsync to automate this. Read the following step-by-step instructions here:
http://kushellig.de/linux-file-auto-sync-directories/
I used to have the same setup under Windows as you, that is a local filetree (versioned) and a test environment on a remote server, which I kept mirrored in realtime with WinSCP. When I switched to Mac I had to do quite some digging before I was happy, but finally ended up using:
SmartSVN as my subversion client
Sublime Text 2 as my editor (already used it on Windows)
SFTP-plugin to ST2 which handles the uploading on save (sorry, can't post more than 2 links)
I can really recommend this setup, hope it helps!
I have been using WinSCP on Wine for a few years now and it works fine for the syncing operations you mention.
Here are some instructions I posted to Github on how to setup via wine: WinSCP_On_Wine
Just be aware that WinSCP is not being actively tested on Wine, so there may be some quirky issues. However, I use it daily on Ubuntu 20.04 for all my devops work, have never lost a file, and rarely experience any such quirks.
You can also use Fetch as an SFTP client, and then edit files directly on the server from within that. There are also SSHFS (mount an ssh folder as a Volume) options. This is in line with what stimms said - are you sure you want stuff kept in sync, or just want to edit files on the server?
OS X has its own file notification system - this is what Spotlight is based upon. I haven't heard of any program that uses this to keep things in sync, but it's certainly conceivable.
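If the fswatch utility happens to be available, a sketch of driving rsync from that notification system might look like this (paths and host are placeholders; this is an illustration, not something the answer proposes):
# -o makes fswatch print one event count per batch of changes
fswatch -o ~/Sites/project | while read _count; do
    rsync -az ~/Sites/project/ user@host.com:project/
done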
I personally use RCS for this type of thing: whilst it's got a manual aspect, it's unlikely I want to push something even to the test server from my dev machine without testing it first. And if I am working on a development server, then I use one of the options given above.
Well, I had the same kind of problem, and it is possible to solve it using these together: rsync, SSH passwordless login, Watchdog (a Python file-watching utility) and Terminal Notifier (an OS X notification utility made with Ruby; not needed, but it helps to know when the sync has finished).
1. I created the key for passwordless login using this tutorial from the Dreamhost wiki: http://cl.ly/MIw5
1.1. When you finish, test if everything is OK… if you can't log in without a password, maybe you have to try an afp mount. Dreamhost (where my site is) does not allow afp mount, but allows passwordless login. In terminal, type:
ssh username@host.com
You should log in without being asked for a password :P
2. I installed Terminal Notifier from the GitHub page: http://cl.ly/MJ5x
2.1. I used the Gem installer command. In Terminal, type:
gem install terminal-notifier
2.3. Test if the notification works. In Terminal, type:
terminal-notifier -message "Starting sync"
3. Create a sh script to test the rsync + notification. Save it anywhere you like, with whatever name you like. In this example, I'll call it ~/Scripts/sync.sh. I used the .sh extension, but I don't know if it's needed.
#!/bin/bash
terminal-notifier -message "Starting sync"
rsync -azP ~/Sites/folder/ user#host.com:site_folder/
terminal-notifier -message "Sync has finished"
3.1. Remember to give execution permission to this sh script. In Terminal, type:
sudo chmod 777 ~/Scripts/sync.sh
3.2. Run the script and verify that the messages are displayed correctly and that rsync actually syncs your local folder with the remote folder.
4. Finally, I downloaded and installed Watchdog from the GitHub page: http://cl.ly/MJfb
4.1. First, I installed the libyaml dependency using Brew (there is lots of help on how to install Brew - it's like an "aptitude" for OS X). In Terminal, type:
brew install libyaml
4.2. Then, I used the easy_install command. Go to the folder of Watchdog, and type in Terminal:
easy_install watchdog
Now everything is installed! Go to the folder you want to be synced, change this code to your needs, and type in Terminal:
watchmedo shell-command \
--patterns="*.php;*.txt;*.js;*.css" \
--recursive \
--command='~/Scripts/sync.sh' \
.
It has to be EXACTLY this way, with the slashes and line breaks, so you'll have to copy these lines to a text editor, change the script, paste in terminal and press return.
I tried without the line breaks, and it doesn't work!
In my Mac, I always get an error, but it doesn't seem to affect anything:
/Library/Python/2.7/site-packages/argh-0.22.0-py2.7.egg/argh/completion.py:84: UserWarning: Bash completion not available. Install argcomplete.
Now make some changes to a file inside the folder, and watch the magic!
I'm using this little Ruby script:
#!/usr/bin/env ruby
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Rsyncs 2Folders
#
# watchAndSync by Mike Mitterer, 2014 <http://www.MikeMitterer.at>
# with credit to Brett Terpstra <http://brettterpstra.com>
# and Carlo Zottmann <https://github.com/carlo/haml-sass-file-watcher>
# Found link on: http://brettterpstra.com/2011/03/07/watch-for-file-changes-and-refresh-your-browser-automatically/
#
trap("SIGINT") { exit }
if ARGV.length < 2
puts "Usage: #{$0} watch_folder sync_folder"
puts "Example: #{$0} web keepInSync"
exit
end
dev_extension = 'dev'
filetypes = ['css','html','htm','less','js', 'dart']
watch_folder = ARGV[0]
sync_folder = ARGV[1]
puts "Watching #{watch_folder} and subfolders for changes in project files..."
puts "Syncing with #{sync_folder}..."
while true do
files = []
filetypes.each {|type|
files += Dir.glob( File.join( watch_folder, "**", "*.#{type}" ) )
}
new_hash = files.collect {|f| [ f, File.stat(f).mtime.to_i ] }
hash ||= new_hash
diff_hash = new_hash - hash
unless diff_hash.empty?
hash = new_hash
diff_hash.each do |df|
puts "Detected change in #{df[0]}, syncing..."
system("rsync -avzh #{watch_folder} #{sync_folder}")
end
end
sleep 1
end
Adapt it for your needs!
If you are developing Python on a remote server, PyCharm may be a good choice for you. You can synchronize your remote files with your local files using PyCharm's remote development feature. The guide is here:
https://www.jetbrains.com/help/pycharm/creating-a-remote-server-configuration.html
