I am managing a website using git. One of the requirements for the git repository is that bare = true. It uses a post-receive hook to manage pushes from my local computer. The problem is that sometimes I would like to make changes to a WordPress directory on my website using the wp-admin view online. So then I would just ssh into the directory and run git --work-tree="BLAH" add . and git --work-tree="BLAH" commit -m "BLAH". Is there a way to set up an alias, like alias git="git --work-tree=\"BLAH\"" and have that work for all git commands?
There are times when aliases are a great tool. Then there are times when things get complicated enough that a shell script is the better choice.
To create a single command that executes several others, just create a file (maybe call it git-add-all) and put the following in it:
#! /bin/bash
git --work-tree="BLAH" add .
git --work-tree="BLAH" commit -m "BLAH"
Then you can run the script by simply doing:
bash git-add-all
Even better, make the script executable:
chmod +x git-add-all
Then you can use it like any command:
./git-add-all
Advanced tips:
To be able to run the script from any git directory you can copy/move the file to one of the directories in your $PATH. For example /usr/local/bin. Then you can simply run git-add-all instead of ./git-add-all.
Even better is to create your own personal scripts directory and include it in $PATH. I personally use ~/bin. To add the directory to $PATH you just need to add the following to .bashrc or .profile:
export PATH=/home/username/bin:$PATH
or if you're doing this for the root user:
export PATH=/root/bin:$PATH
In case anyone is curious how I solved it (thanks to shellter's comment), I wrote a bash script that prompts the user for input, like so:
#!/bin/bash
function fix {
    # pass all of the words the user typed through to git
    git --work-tree="PATH_TO_WORKING_TREE" "$@"
}

echo -n "git "
read -e INPUT
until [ "$INPUT" = "quit" ]; do
    fix $INPUT
    echo -n "git "
    read -e INPUT
done
Running it:
user@server [repo.git] $ git-fix
git status
# On branch master
nothing to commit (working directory clean)
git quit
There is a .bashrc file in Linux. You can edit it to create aliases for your favorite and frequently used commands.
To create an alias permanently, add the alias to your .bashrc file:
gedit ~/.bashrc
The alias should look like:
alias al='cmd'
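Applied to the work-tree case from the question, a hypothetical entry (the path is a placeholder) would be:
alias git='git --work-tree="/path/to/work-tree"'
After saving, run source ~/.bashrc (or open a new shell) so the alias takes effect for every git command you type interactively.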
On my project, there are two branches I'm working on (Develop and Release), each of which makes generous use of submodules. The Develop branch uses about twice as many submodules as Release, because it is where we test ideas.
When I switch branches from Develop to Release, the directories of Develop-specific submodules stay where they are, and so they become untracked. This makes things a bit confusing for me, because I do occasionally need to add or remove submodules from Release as well, and the git status message becomes a long list of untracked modules, some of which I want to use and some I don't.
What I would like to do is remove all untracked submodules from my project as soon as I switch from Develop to Release, so that I'm working with a "clean slate" (i.e. no untracked submodules sitting in my working directory).
I have found several solutions to removing individual submodules one at a time, such as here: How do I remove a submodule?
However, solutions such as this assume that the git submodules are in use and being tracked (which they are not), and it is also a pain in the neck to remove them one at a time when I'm working with something like 15-20 submodules.
I have also tried piping linux commands like so:
git ls-files --others --exclude-standard | rm -rf
But the command does not appear to do anything. I have also tried using the same with git rm -rf, to no avail.
Does anyone know if there is an easy way to remove all untracked git submodules from a working directory? Any advice anyone can share on this matter would be greatly appreciated. Thank you!
With some advice from helpful folks in the comments section, I determined that there is no obvious, non-tedious solution to this problem. Instead I created a bash script that does the job for me. Here it is, in case anyone else has the same issue:
#!/bin/bash
clear
git ls-files --others --directory --exclude-standard
echo
read -r -p "Are you sure you want to remove these untracked submodules? [y/N] " response
if [[ $response =~ ^([yY][eE][sS]|[yY])$ ]]
then
    # delete each untracked submodule's .git entry so git clean will remove the directory
    git ls-files --others --directory --exclude-standard | while read -r line; do
        rm "$line/.git" &> /dev/null
    done
    git clean -d -f
fi
Easy steps for people not super comfortable with Bash scripts:
Step 1: Copy the above script into a file and name it 'cleanUSM'. Save the file to /usr/bin. If you are having trouble saving it or finding /usr/bin, just save it to your current directory, and then use 'sudo mv cleanUSM /usr/bin/cleanUSM' to get it where it needs to go.
Step 2: From within your root directory, run the command 'cleanUSM'
Thanks to everyone who contributed!
I've got a git workflow set up similar to this http://joemaller.com/990/a-web-focused-git-workflow/. Essentially I have local repositories that report to a remote repository that is bare. Then I have my deployment directory, accessible via web, also set up as a repository that reports to the same bare repository.
It's set up with git hooks so that when a local developer pushes changes to the remote repository, a hook goes into the web folder and pulls from the repository so it always has the latest and greatest. It all works pretty well.
The crux is that I'm looking to accommodate the people who don't want to use Git and just want to upload files to the web folder via FTP. I've kind of got this working by setting up an inotifywait monitor on the web folder for whenever files are written, modified, moved, deleted, created, etc. My bash script for this is as follows.
#!/bin/sh
inotifywait @*.swp -rm -e modify,move,create,delete,delete_self,unmount /var/www/html/mysite | while read
do
now=$(date +"%m_%d_%Y:%T")
echo $now >> temp.txt
cd /var/www/html/mysite || exit
git add --all
git commit -m "ftp update $now" -a
done
This too actually works, but what I'm observing is that I'm stuck in the while loop once I trigger the inotifywait by modifying a file in my web folder. Anyone have any ideas on this? Ideally I would love it to do its thing and not be stuck in the while loop continuously running unnecessary git commands.
Thanks!
The man page for inotifywait suggests that you do a different loop style:
while inotifywait -e modify /var/log/messages; do
…
done
have you tried that?
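Applied to your script, that style would look roughly like this (an untested sketch, keeping your paths and events; the swap-file exclusion is expressed with --exclude here):
#!/bin/sh
# no -m/monitor flag: inotifywait exits after an event, the body runs once,
# and the while loop starts a fresh wait instead of staying inside a read loop
while inotifywait -r -e modify,move,create,delete,delete_self,unmount \
        --exclude '\.swp$' /var/www/html/mysite; do
    now=$(date +"%m_%d_%Y:%T")
    cd /var/www/html/mysite || exit
    git add --all
    git commit -m "ftp update $now"
done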
All of my code base is being stored in a subversion repository that I disperse amongst my load balanced Apache web servers, making it easy to check out code, run updates, and seamlessly get my code in development onto production.
One of the inconveniences that I'm sure there is an easy workaround for (other than executing a script upon every checkout) is getting the Linux permissions set (back) on files that are updated or checked out with Subversion. Our security team requires that the owner and group match what is set in the httpd.conf file, that all directories within the DocumentRoot receive permissions of 700, that all non-executable files (e.g. *.php, *.smarty, *.png) receive permissions of 600, and that all executable files (e.g. *.sh, *.pl, *.py) receive 700. All files must have owner and group set to apache:apache in order to be read by the httpd service, since only the file owner is granted access via the permissions.
Every time I run an svn update or svn co, even though the files may not be created (i.e. svn update), I'm finding that the ownership of the files is getting set to the account that is running the svn commands, and oftentimes the file permissions are getting set to something other than what they were originally (e.g. a .htm file is 600 before an update, but after an svn update it gets set to 755, or even 777).
What is the easiest way to bypass subversion's attempts at updating the file permissions and ownership? Is there something that can be done within the svn client, or on the Linux server to retain the original file permissions? I'm running RHEL5 (and now 6 on a few select instances).
The owner of the files will be set to the user that is running the svn command because of how the underlying update is implemented - it removes and replaces files that are updated, which causes the ownership to 'change' to the relevant user. The only way to prevent this is to actually perform the svn up as the user that the files are supposed to be owned by. If you want to ensure that they're owned by a particular user, then run the command as that user.
With regard to the permissions, svn is only obeying the umask setting of the account - probably something permissive like 022. To ensure that the files are inaccessible to group and other accounts, issue 'umask 077' before performing the svn up; this ensures that the files are only accessible to the user account issuing the command.
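For example, a deployment step run as the apache user might look like this (a sketch; the path is a placeholder):
# files created or replaced by svn get 600, directories 700, for this shell session
umask 077
svn up /var/www/html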
I'd pay attention to the security issue of deploying the subversion data into the web server unless the .svn directories are secured.
You can store properties on a file in Subversion (see http://svnbook.red-bean.com/en/1.0/ch07s02.html). You're particularly interested in the svn:executable property, which will make sure that the executable permission is stored.
There's no general way to do this for all permissions, though. Subversion doesn't store ownership either - it assumes that, if you check something out, you own it.
You can solve this. Use setgid.
You have apache:apache running the server
Set group permission on all files and directories. The server will read files by its group
Set setgid on all directories - only on directories: setting this on files has a different function
Example ('2' is setgid):
chmod 2750
Make apache the group of all directories
What happens is
New files and directories created by any account will be owned by the apache group
New directories will inherit the setgid and thus preserve the structure without any effort
See https://en.wikipedia.org/wiki/Setuid#setuid_and_setgid_on_directories
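A sketch of the commands involved (the path and the apache group are assumptions for the example):
# make apache the group owner of the whole document root
chgrp -R apache /var/www/html
# directories: 2750 = setgid + rwxr-x---, so new entries inherit the apache group
find /var/www/html -type d -exec chmod 2750 {} +
# regular files: readable by owner and group only
find /var/www/html -type f -exec chmod 640 {} +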
One thing you may consider doing is installing the svn binary outside your path, and putting a replacement script in the path (at /usr/bin/svn, or wherever). The script would look something like this:
#!/bin/sh
# set umask, whatever else you need to do before svn commands
/opt/svn/svn "$@" # pass all arguments to the actual svn binary, stored outside the PATH
# run chmod, whatever else you need to do after svn commands
A definite downside is that you'll probably have to do some amount of parsing of the arguments passed to svn, i.e. so you can pass the same path to your chmod, not run chmod for most svn commands, etc.
There are also probably some security considerations here. I don't know what your deployment environment is like, but you should probably investigate that a bit further.
I wrote a small script that stores permissions and owner, executes your SVN command and restores permissions and owner.
It is probably not hackerproof, but for private use it does the job.
svnupdate.sh:
#!/usr/bin/env bash
if [ $# -eq 0 ]; then
    echo "Syntax: $0 <filename>"
    exit
fi

IGNORENEXT=0
COMMANDS=''
FILES=''
for FILENAME in "$@"
do
    if [[ $IGNORENEXT -gt 0 ]]; then
        IGNORENEXT=0
    else
        case $FILENAME in
            # global options, shift argument if needed
            --username|--password|--config-dir|--config-option)
                IGNORENEXT=1
                ;;
            --no-auth-cache|--non-interactive|--trust-server-cert)
                ;;
            # update arguments, shift argument if needed
            -r|--revision|--depth|--set-depth|--diff3-cmd|--changelist|--editor-cmd|--accept)
                IGNORENEXT=1
                ;;
            -N|--non-recursive|-q|--quiet|--force|--ignore-externals)
                ;;
            *)
                if [ -f "$FILENAME" ]; then
                    FILES="$FILES $FILENAME"
                    OLDPERM=$(stat -c%a "$FILENAME")
                    OLDOWNER=$(stat -c%U "$FILENAME")
                    OLDGROUP=$(stat -c%G "$FILENAME")
                    # remember a command that puts the permissions and owner back afterwards
                    FILECOMMANDS="chmod $OLDPERM $FILENAME; chown $OLDOWNER:$OLDGROUP $FILENAME;"
                    COMMANDS="$COMMANDS $FILECOMMANDS"
                    echo "COMMANDS: $FILECOMMANDS"
                else
                    echo "File not found: $FILENAME"
                fi
                ;;
        esac
    fi
done

OUTPUT=$(svn update "$@")
STATUS=$?
echo "$OUTPUT"
if [[ ( $STATUS -eq 0 ) && ( $OUTPUT != Skipped* ) && ( $OUTPUT != "At revision"* ) ]]; then
    bash -c "$COMMANDS"
    ls -l $FILES
fi
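Hypothetical usage, passing the same file arguments you would give svn update:
./svnupdate.sh index.php css/style.css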
I also had a similar problem.
I found a cool script: asvn (Archive SVN).
You can download it here:
https://svn.apache.org/repos/asf/subversion/trunk/contrib/client-side/asvn
Description:
Archive SVN (asvn) will allow the recording of file types not
normally handled by svn. Currently this includes devices,
symlinks and file ownership/permissions.
Every file and directory has a 'file:permissions' property set and
every directory has a 'dir:devices' and 'dir:symlinks' for
recording the extra information.
Run this script instead of svn with the normal svn arguments.
This blog entry (which helped me find the script) http://jon.netdork.net/2010/06/28/configuration-management-part-ii-setting-up-svn/ shows a simple usage.
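Hypothetical usage (the repository URL and path are placeholders); asvn takes the same arguments svn would:
asvn checkout http://svn.example.com/repos/mysite /var/www/html
asvn update /var/www/html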
I absolutely love the Keep Remote Directory Up-to-date feature in Winscp. Unfortunately, I can't find anything as simple to use in OS X or Linux. I know the same thing can theoretically be accomplished using changedfiles or rsync, but I've always found the tutorials for both tools to be lacking and/or contradictory.
I basically just need a tool that works in OSX or Linux and keeps a remote directory in sync (mirrored) with a local directory while I make changes to the local directory.
Update
Looking through the solutions, I see a couple which solve the general problem of keeping a remote directory in sync with a local directory manually. I know that I can set a cron task to run rsync every minute, and this should be fairly close to real time.
This is not the exact solution I was looking for as winscp does this and more: it detects file changes in a directory (while I work on them) and then automatically pushes the changes to the remote server. I know this is not the best solution (no code repository), but it allows me to very quickly test code on a server while I develop it. Does anyone know how to combine rsync with any other commands to get this functionality?
Lsyncd seems to be the perfect solution. It combines inotify (a kernel built-in facility that watches for file changes in directory trees) and rsync (a cross-platform file-syncing tool).
lsyncd -rsyncssh /home remotehost.org backup-home/
Quote from github:
Lsyncd watches a local directory trees event monitor interface (inotify or fsevents). It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes. By default this is rsync. Lsyncd is thus a light-weight live mirror solution that is comparatively easy to install not requiring new filesystems or blockdevices and does not hamper local filesystem performance.
How "real-time" do you want the syncing? I would still lean toward rsync since you know it is going to be fully supported on both platforms (Windows, too, with cygwin) and you can run it via a cron job. I have a super-simple bash file that I run on my system (this does not remove old files):
#!/bin/sh
rsync -avrz --progress --exclude-from .rsync_exclude_remote . remote_login@remote_computer:remote_dir
# options
# -a archive
# -v verbose
# -r recursive
# -z compress
Your best bet is to set it up and try it out. The -n (--dry-run) option is your friend!
Keep in mind that rsync (at least in cygwin) does not support unicode file names (as of 16 Aug 2008).
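If you do run it from cron as mentioned above, a hypothetical crontab entry running the script every minute (the script name and log path are placeholders) could be:
* * * * * /home/username/bin/rsync_push.sh >> /home/username/rsync_push.log 2>&1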
What you want to do for Linux remote access is use 'sshfs' - the SSH File System.
# sshfs username@host:path/to/directory local_dir
Then treat it like a network mount, which it is...
A bit more detail, like how to set it up so you can do this as a regular user, is on my blog.
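When you are done, unmount it like any other FUSE mount (using the local_dir from the example above):
# Linux
fusermount -u local_dir
# OS X (with a FUSE implementation such as OSXFUSE installed)
umount local_dir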
If you want the asynchronous behavior of winSCP, you'll want to use rsync combined with something that executes it periodically. The cron solution above works, but may be overkill for the winscp use case.
The following command will execute rsync every 5 seconds to push content to the remote host. You can adjust the sleep time as needed to reduce server load.
# while true; do rsync -avrz localdir user@host:path; sleep 5; done
If you have a very large directory structure and need to reduce the overhead of the polling, you can use 'find':
# touch -d 01/01/1970 last; while true; do if [ "`find localdir -newer last -print -quit`" ]; then touch last; rsync -avrz localdir user@host:path; else echo -ne .; fi; sleep 5; done
And I said cron may be overkill? But at least this is all just done from the command line, and can be stopped via a ctrl-C.
To detect changed files, you could try fam (file alteration monitor) or inotify. The latter is Linux-specific; fam has a BSD port which might work on OS X. Both have userspace tools that could be used in a script together with rsync.
I have the same issue. I loved WinSCP's "keep remote directory up to date" command. However, in my quest to rid myself of Windows, I lost WinSCP. I did write a script that uses fileschanged and rsync to do something similar, much closer to real time.
How to use:
Make sure you have fileschanged installed
Save this script in /usr/local/bin/livesync or somewhere reachable in your $PATH and make it executable
Use Nautilus to connect to the remote host (sftp or ftp)
Run this script by doing livesync SOURCE DEST
The DEST directory will be in /home/[username]/.gvfs/[path to ftp scp or whatever]
A couple of downsides:
It is slower than winscp (my guess is because it goes through Nautilus and has to detect changes through rsync as well)
You have to manually create the destination directory if it doesn't already exist. So if you're adding a directory, it won't detect and create the directory on the DEST side.
Probably more that I haven't noticed yet
Also, do not attempt to synchronize a SRC directory named "rsyncThis". That will probably not be good :)
#!/bin/sh

upload_files()
{
    if [ "$HOMEDIR" = "." ]
    then
        HOMEDIR=`pwd`
    fi

    while read input
    do
        SYNCFILE=${input#$HOMEDIR}
        echo -n "Sync File: $SYNCFILE..."
        rsync -Cvz --temp-dir="$REMOTEDIR" "$HOMEDIR/$SYNCFILE" "$REMOTEDIR/$SYNCFILE" > /dev/null
        echo "Done."
    done
}

help()
{
    echo "Live rsync copy from one directory to another. This will overwrite the existing files on DEST."
    echo "Usage: $0 SOURCE DEST"
}

case "$1" in
    rsyncThis)
        HOMEDIR=$2
        REMOTEDIR=$3
        echo "HOMEDIR=$HOMEDIR"
        echo "REMOTEDIR=$REMOTEDIR"
        upload_files
        ;;
    help)
        help
        ;;
    *)
        if [ -n "$1" ] && [ -n "$2" ]
        then
            fileschanged -r "$1" | "$0" rsyncThis "$1" "$2"
        else
            help
        fi
        ;;
esac
You could always use version control, like SVN, so all you have to do is have the server run svn up on a folder every night. This runs into security issues if you are sharing your files publicly, but it works.
If you are using Linux though, learn to use rsync. It's really not that difficult as you can test every command with -n. Go through the man page, the basic format you will want is
rsync [OPTION...] SRC... [USER@]HOST:DEST
The command I run from my school server to my home backup machine is this:
rsync -avi --delete ~ me@homeserv:~/School/ >> BackupLog.txt
This takes all of the files in my home directory (~) and uses rsync's archive mode (-a), verbosely (-v), lists all of the changes made (-i), while deleting any files that don't exist anymore (--delete), and puts them in the folder /home/me/School/ on my remote server. All of the information it prints out (what was copied, what was deleted, etc.) is also appended to the file BackupLog.txt.
I know that's a whirlwind tour of rsync, but I hope it helps.
The rsync solutions are really good, especially if you're only pushing changes one way. Another great tool is unison -- it attempts to synchronize changes in both directions. Read more at the Unison homepage.
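A minimal invocation could look like this (a sketch; the paths and host are placeholders, and unison has to be installed on both machines):
unison ~/Sites/mysite ssh://user@remotehost//var/www/mysite -auto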
Great question! I have searched for an answer for hours!
I have tested lsyncd, and the problem is that the default delay is far too long and no example command line gives the -delay option.
The other problem is that by default rsync asks for a password each time!
Solution with lsyncd:
lsyncd --nodaemon -rsyncssh local_dir remote_user@remote_host remote_dir -delay .2
Another way is to use inotifywait in a script:
while inotifywait -r -e modify,create,delete local_dir ; do
# if you need you can add wait here
rsync -avz local_dir remote_user@remote_host:remote_dir
done
For this second solution you will have to install the inotify-tools package.
To suppress the need to enter a password at each change, simply use ssh-keygen:
https://superuser.com/a/555800/510714
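In short (a sketch, assuming OpenSSH defaults):
# generate a key pair; leave the passphrase empty for fully unattended syncs
ssh-keygen -t rsa
# install the public key on the account rsync logs in as
ssh-copy-id remote_user@remote_host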
It seems like perhaps you're solving the wrong problem. If you're trying to edit files on a remote computer, then you might try using something like the FTP plugin for jEdit: http://plugins.jedit.org/plugins/?FTP This ensures that you have only one version of the file, so it can't ever be out of sync.
Building off of icco's suggestion of SVN, I'd actually suggest that if you are using subversion or similar for source control (and if you aren't, you should probably start) you can keep the production environment up to date by putting the command to update the repository into the post-commit hook.
There are a lot of variables in how you'd want to do that, but what I've seen work is have the development or live site be a working copy and then have the post-commit use an ssh key with a forced command to log into the remote site and trigger an svn up on the working copy. Alternatively in the post-commit hook you could trigger an svn export on the remote machine, or a local (to the svn repository) svn export and then an rsync to the remote machine.
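A bare-bones sketch of such a hook (hooks/post-commit in the repository; the host, path, and ssh key setup are assumptions for the example):
#!/bin/sh
REPOS="$1"
REV="$2"
# log in with a dedicated ssh key and update the working copy that serves the site
ssh deploy@www.example.com "svn up /var/www/site"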
I would be worried about things that detect changes and push them, and I'd even be worried about things that ran every minute, just because of race conditions. How do you know it's not going to transfer the file at the very same instant it's being written to? Stumble across that once or twice and you'll lose all of the time-saving advantage you had by constantly rsyncing or similar.
Will DropBox (http://www.getdropbox.com/) do what you want?
Use watcher.py and rsync to automate this. Read the step-by-step instructions here:
http://kushellig.de/linux-file-auto-sync-directories/
I used to have the same setup under Windows as you, that is a local filetree (versioned) and a test environment on a remote server, which I kept mirrored in realtime with WinSCP. When I switched to Mac I had to do quite some digging before I was happy, but finally ended up using:
SmartSVN as my subversion client
Sublime Text 2 as my editor (already used it on Windows)
SFTP-plugin to ST2 which handles the uploading on save (sorry, can't post more than 2 links)
I can really recommend this setup, hope it helps!
I have been using WinSCP on Wine for a few years now and it works fine for the syncing operations you mention.
Here are some instructions I posted to Github on how to setup via wine: WinSCP_On_Wine
Just be aware that WinSCP is not being actively tested on Wine, so there may be some quirky issues. However, I use it daily on Ubuntu 20.04 for all my devops work and have never lost a file, and I rarely experience any such quirks.
You can also use Fetch as an SFTP client, and then edit files directly on the server from within that. There are also SSHFS (mount an ssh folder as a Volume) options. This is in line with what stimms said - are you sure you want stuff kept in sync, or just want to edit files on the server?
OS X has its own file notification system - this is what Spotlight is based upon. I haven't heard of any program that uses this to keep things in sync, but it's certainly conceivable.
I personally use RCS for this type of thing: whilst it's got a manual aspect, it's unlikely I want to push something to even the test server from my dev machine without testing it first. And if I am working on a development server, then I use one of the options given above.
Well, I had the same kind of problem, and it is possible using these together: rsync, SSH Passwordless Login, Watchdog (a Python sync utility) and Terminal Notifier (an OS X notification utility made with Ruby; not needed, but it helps to know when the sync has finished).
I created the key to Passwordless Login using this tutorial from Dreamhost wiki: http://cl.ly/MIw5
1.1. When you finish, test if everything is OK... if you can't log in without a password, maybe you have to try an afp mount. Dreamhost (where my site is) does not allow afp mount, but allows passwordless login. In terminal, type:
ssh username@host.com
You should log in without being asked for a password :P
I installed the Terminal Notifier from the Github page: http://cl.ly/MJ5x
2.1. I used the Gem installer command. In Terminal, type:
gem install terminal-notifier
2.3. Test if the notification works. In Terminal, type:
terminal-notifier -message "Starting sync"
Create a sh script to test the rsync + notification. Save it anywhere you like, with the name you like. In this example, I'll call it ~/Scripts/sync.sh. I used the .sh extension, but I don't know if it's needed.
#!/bin/bash
terminal-notifier -message "Starting sync"
rsync -azP ~/Sites/folder/ user@host.com:site_folder/
terminal-notifier -message "Sync has finished"
3.1. Remember to give execution permission to this sh script. In Terminal, type:
sudo chmod 777 ~/Scripts/sync.sh
3.2. Run the script and verify if the messages are displayed correctly and the rsync actually sync your local folder with the remote folder.
Finally, I downloaded and installed Watchdog from the Github page: http://cl.ly/MJfb
4.1. First, I installed the libyaml dependency using Brew (there's lots of help on how to install Brew, which is like an "aptitude" for OS X). In Terminal, type:
brew install libyaml
4.2. Then, I used the easy_install command. Go to the folder of Watchdog, and type in Terminal:
easy_install watchdog
Now, everything is installed! Go to the folder you want to be synced, change this code to your needs, and type in Terminal:
watchmedo shell-command \
--patterns="*.php;*.txt;*.js;*.css" \
--recursive \
--command='~/Scripts/sync.sh' \
.
It has to be EXACTLY this way, with the slashes and line breaks, so you'll have to copy these lines to a text editor, change the script, paste in terminal and press return.
I tried without the line breaks, and it doesn't work!
In my Mac, I always get an error, but it doesn't seem to affect anything:
/Library/Python/2.7/site-packages/argh-0.22.0-py2.7.egg/argh/completion.py:84: UserWarning: Bash completion not available. Install argcomplete.
Now make some changes in a file inside the folder, and watch the magic!
I'm using this little Ruby-Script:
#!/usr/bin/env ruby
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Rsyncs 2Folders
#
# watchAndSync by Mike Mitterer, 2014 <http://www.MikeMitterer.at>
# with credit to Brett Terpstra <http://brettterpstra.com>
# and Carlo Zottmann <https://github.com/carlo/haml-sass-file-watcher>
# Found link on: http://brettterpstra.com/2011/03/07/watch-for-file-changes-and-refresh-your-browser-automatically/
#
trap("SIGINT") { exit }
if ARGV.length < 2
  puts "Usage: #{$0} watch_folder sync_folder"
  puts "Example: #{$0} web keepInSync"
  exit
end
dev_extension = 'dev'
filetypes = ['css','html','htm','less','js', 'dart']
watch_folder = ARGV[0]
sync_folder = ARGV[1]
puts "Watching #{watch_folder} and subfolders for changes in project files..."
puts "Syncing with #{sync_folder}..."
while true do
  files = []
  filetypes.each {|type|
    files += Dir.glob( File.join( watch_folder, "**", "*.#{type}" ) )
  }
  new_hash = files.collect {|f| [ f, File.stat(f).mtime.to_i ] }
  hash ||= new_hash
  diff_hash = new_hash - hash

  unless diff_hash.empty?
    hash = new_hash
    diff_hash.each do |df|
      puts "Detected change in #{df[0]}, syncing..."
      system("rsync -avzh #{watch_folder} #{sync_folder}")
    end
  end

  sleep 1
end
Adapt it for your needs!
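To run it (assuming you saved it as watchAndSync.rb, per the header comment), make it executable and pass the folder to watch and the folder to sync to, as in the script's own usage example:
chmod +x watchAndSync.rb
./watchAndSync.rb web keepInSync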
If you are developing Python on a remote server, PyCharm may be a good choice for you. You can synchronize your remote files with your local files using PyCharm's remote development feature. The guide is here:
https://www.jetbrains.com/help/pycharm/creating-a-remote-server-configuration.html