I would like a local Git repository in my home directory that implements an autosave to the repository every five minutes.
I have two questions:
Is this a sane thing to do?
How does one go about writing a script that implements this functionality for a specified set of directories in the home directory on Linux?
The aim is to capture the history of all the important files in my home directory automatically, without any input from me. I can use this whenever I screw up.
Sanity is all relative!
I guess it depends on why you are backing up. If it's for hardware failure, this won't work, because the repository lives in the same folder (/home/), so if the folder goes, the repo goes. Unless, of course, you are pushing it to a storage repo on another machine somewhere as the actual backup.
We do use git to store important things, especially research papers and PDFs, so we can easily share them.
You would write a cron job that runs a script every so often. Basically, you would write a simple bash script that periodically does a git commit -a -m "commit message" in your folder. The tricky part is doing the git add on any new files that were created, so that they are tracked. You will likely need to do a git status and parse its output in your script to find the new files, then git add that list. Python may be the easiest way to do that. Then you register the script with cron.
Google is your friend here; there are plenty of examples of how to register scripts with cron.
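For what it's worth, here is a minimal sketch of that idea in bash (the path and commit message format are just placeholders). Instead of parsing git status by hand or reaching for Python, git ls-files --others can list the untracked files directly:
#!/bin/bash
# Hypothetical autosave script; point REPO_DIR at the repository you want to snapshot.
REPO_DIR="$HOME"
cd "$REPO_DIR" || exit 1
# Stage new (untracked) files; ls-files --others lists them, honoring .gitignore.
git ls-files --others --exclude-standard -z | xargs -0 -r git add --
# Commit modifications to already-tracked files along with the files staged above.
git commit -a -m "autosave $(date +%F_%T)"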
Write a shell script that enters each directory you want and runs
git add .
git commit -m "new change"
git push
and then use cron to run the script every 5 minutes.
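For example, the crontab entry for a five-minute interval might look like this (the script path is hypothetical):
*/5 * * * * /home/you/bin/autocommit.sh >/dev/null 2>&1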
Write a shell script to do the following:
1) git status --untracked-files=no   # gives you the files which are modified
2) Iterate through the file list from step 1 and do git add <file>
3) git commit -m "latest change <date:time>"
Schedule this script in cron.
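A rough sketch of those three steps as a script, assuming the repository lives at ~/tracked (a made-up path):
#!/bin/sh
cd "$HOME/tracked" || exit 1
# Steps 1 and 2: --porcelain gives stable, machine-readable output;
# the file path starts at column 4 of each line.
git status --untracked-files=no --porcelain | cut -c4- | while IFS= read -r f; do
    git add -- "$f"
done
# Step 3: commit with a timestamp.
git commit -m "latest change $(date '+%F %T')"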
I have this problem on macOS Mojave, but I guess it generalizes to any bash environment. In my .bashrc or .profile, I added one line:
alias gc="git add .;git commit --message="$(date +"iMac_%D_%T")""
My purpose is to include the current system time in the message when committing a change by typing gc. However, the system time is read when the alias is defined (here, that is when I log in to the system).
Can anyone help me out? Thank you in advance!
The simpler approach is to make this a shell function and not an alias at all:
gc() {
  git add . && git commit --message="$(date +"iMac_%D_%T")" "$@"
}
That said, as a matter of good git hygiene, I strongly advise against doing this; you'll get output files and temporary files you don't want checked in. git commit -a, by not adding new files, is somewhat safer, though using git add -p to review changes hunk by hunk is by far the best practice to avoid mixing unrelated and unwanted changes into your commits.
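Once the function is in your .bashrc and you've opened a new shell (or run source ~/.bashrc), it works the way the alias was meant to, and any extra arguments fall through to git commit via "$@":
gc              # commits with a fresh timestamp, e.g. "iMac_01/02/23_10:15:30"
gc --amend      # extra options are passed along to git commit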
I need to execute several SVN update processes in the background under the same folder structure, as I have many subfolders and want to speed the whole thing up.
I have folder structure like this:
/folder/subfolder1/
/folder/subfolder2/
/folder/subfolder3/
...
/folder/subfolder1000/
I'm trying to do something like this in a bash script:
svn up /folder/subfolder1 &
svn up /folder/subfolder2 &
svn up /folder/subfolder3 &
The problem is that SVN complains that '/folder' is locked; only the first task finishes correctly, the other two don't, and I get an error message like this:
svn: Working copy '/folder' locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
Is there a way to accomplish the task this way, with several parallel SVN processes? Updating one folder at a time (selected by some other process) takes a lot of time to finish.
P.S.: I'm doing all this from a higher-level programming language (PHP-CLI), but for simplicity I've written the question as a bash script (I get the same problem).
In older versions of Subversion every subdirectory had its own .svn directory. If your svn is old enough, and you run the commands in the subdirectories, then I think that won't lock the parent directory and the commands can succeed in parallel. Like this:
(cd /folder/subfolder1; svn up)&
(cd /folder/subfolder2; svn up)&
(cd /folder/subfolder3; svn up)&
Although, to be honest, I thought your original commands should work too; I don't see why the parent directory gets locked.
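If you want bounded parallelism instead of launching one background job per folder, GNU xargs can cap the number of concurrent processes; a sketch (the limit of 4 is arbitrary, and this only helps if the per-subfolder updates don't contend for a shared lock in the first place):
# Run at most 4 svn up processes at a time, one working copy each.
printf '%s\n' /folder/subfolder*/ | xargs -n1 -P4 svn up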
I've got a git workflow set up similar to this: http://joemaller.com/990/a-web-focused-git-workflow/. Essentially, I have local repositories that report to a remote repository that is bare. Then I have my deployment directory, accessible via the web, also set up as a repository that reports to the same bare repository.
It's set up with git hooks so that when a local developer pushes changes to the remote repository, a hook goes into the web folder and pulls from the repository, so it always has the latest and greatest. It all works pretty well.
My crux is that I'm looking to appease the people who don't want to use Git and just want to upload files to the web folder via FTP. I've sort of got this working by setting up an inotifywait monitor on the web folder for whenever files are written, modified, moved, deleted, created, etc. My bash script for this is as follows:
#!/bin/sh
inotifywait --exclude '\.swp$' -rm -e modify,move,create,delete,delete_self,unmount /var/www/html/mysite | while read -r line
do
now=$(date +"%m_%d_%Y:%T")
echo $now >> temp.txt
cd /var/www/html/mysite || exit
git add --all
git commit -m "ftp update $now" -a
done
This too actually works, but what I'm observing is that I'm stuck in the while loop once I trigger inotifywait by modifying a file in my web folder. Anyone have any ideas on this? Ideally I would love it to do its thing and not be stuck in the while loop continuously running unnecessary git commands.
Thanks!
The man page for inotifywait suggests that you do a different loop style:
while inotifywait -e modify /var/log/messages; do
…
done
Have you tried that?
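Applied to your script, that style might look something like this (same paths as yours; the --exclude pattern is a guess at keeping git's own writes under .git from retriggering the watch):
#!/bin/sh
# Without -m, inotifywait exits after one event; the while loop restarts it,
# so git only runs once per batch of changes instead of continuously.
while inotifywait -r -e modify,move,create,delete --exclude '/\.git/' /var/www/html/mysite; do
    now=$(date +"%m_%d_%Y:%T")
    cd /var/www/html/mysite || exit
    git add --all
    git commit -m "ftp update $now"
done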
I am experimenting with some Linux configuration and I want to track my changes. Of course, I don't want to put my whole OS under version control!
Is there a way (with git, mercurial, or any VCS) to track the changes without storing the whole OS?
This is what I imagine:
I do a kind of git init -> all hashes of all files are stored, but not the content of the files
I make some changes to my file system -> git detects that the hash of this file has changed
I commit -> the content of the file is stored (or even better the original file and the diff are stored! I know, that is impossible... )
Possible? Impossible? Work-arounds?
EDIT: What I care about is just minimizing the size of the repository and having a repository containing only my changes. Having all files in my repository is not relevant for me. For example, if I push to GitHub, I want it to contain only the files that have changed.
Take a look at etckeeper, it will probably do the job.
What you want is git update-index --info-only or ... --index-info; from the man page: "--info-only is used to register files without placing them in the object database. This is useful for status-only repositories." --index-info is its industrial-scale cousin.
Do that with the files you want to track, write-tree to write the index structure into the object db, commit-tree that, and update-ref to update a branch.
To get the object name, use git hash-object <filename>.
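Putting those plumbing commands together, a minimal sketch (the file name tracked.conf is made up; run this inside the repository):
git init
# Record the path and hash in the index without storing the file's content.
git update-index --add --info-only tracked.conf
# Turn the index into a tree (--missing-ok, since the blob was never stored),
# wrap the tree in a commit, and point a branch at it.
tree=$(git write-tree --missing-ok)
commit=$(git commit-tree "$tree" -m "status-only snapshot")
git update-ref refs/heads/master "$commit"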
Here is what we do...
su -
cd /etc
echo "*.cache" > .gitignore
git init
chmod 700 .git
cd /etc; git add . && git add -u && git commit -m "Daily Commit"
Then setup crontab:
su -
crontab -e
# Put the following in:
0 3 * * * cd /etc; git add . && git add -u && git commit -m "Daily Commit"
Now you will have a nightly commit of all changes in /etc
If you want to track more than /etc in one repo, you could simply do it at the root of your filesystem, adding the proper ignore paths to your /.gitignore. I am unclear on the effects of having git within git, so you might want to be extra careful in that case.
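For example, a root-level /.gitignore might start with entries like these (just a sketch; adjust to whatever you do and don't want tracked):
# Pseudo-filesystems and volatile data have no business in the repo.
/proc/
/sys/
/dev/
/run/
/tmp/
/var/cache/
/home/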
I know this question is old, but I thought this might help someone. Inspired by @Jonathon's comment on the "How to record concrete modification of specific files" question, I have created a shell script that lets you monitor all the changes made to a specific file, while keeping the full change history. The script depends on the inotifywait and git packages being installed.
You can find the script here
https://github.com/hisham-hassan/linux-file-monitor
Usage: file-monitor.sh [-f|--file] <absolute-file-path> [-m|--monitor|-h|--history]
file-monitor.sh --help
-f,--file <absolute-file-path> Adds a file to the monitored files list. The <absolute-file-path>
is the absolute path of the file we need to monitor.
PLEASE NOTE: A relative file path could cause issues in the script;
please make sure to use the absolute path of the file. Also try to
avoid symlinks, as they have not been tested.
example: file-monitor.sh -f /absolute/path/to/file/test.txt -m
-m, --monitor Monitors all the changes to the file. The monitoring will keep
happening as long as the script is running; you may need to run it
in the background.
example: file-monitor.sh -f /absolute/path/to/file/test.txt -m
-h, --history Shows the full history of the file.
To exit, press "q"
example: file-monitor.sh -f /absolute/path/to/file/test.txt -h
--uninstall Uninstalls the script from the bin directory,
and removes the monitoring history.
--install Adds the script to the bin directory, and creates
the directories and files needed for monitoring.
--help Prints this help message.
I have a git clone/repo on a development server, but I am now moving to another one. I don't want to commit all my local branches and changes to the main repository, so how can I make an exact copy of everything on oldserver to newserver?
I tried oldserver:~$ scp -rp project newserver:~/project
but then I just get loads and loads of "typechange" errors when trying to do anything on newserver.
Someone said something about x-modes, but how can I preserve that when moving files between servers?
If you want a git solution, you could try
git clone --mirror <oldurl> <newurl>
though this is only for bare repositories.
If this is a non-bare repo, you could also do a normal clone, followed by something like this:
git fetch origin
git branch -r | grep '^ *origin/[^ ]*$' |
while read rb; do git branch --no-track ${rb#*/} $rb; done
git remote rm origin
The middle step can of course be done in 5000 different ways, but that's one! (Note that the continuation line \ isn't necessary after the pipe in bash; it knows it needs more input.)
Finally, I'd suggest using rsync instead of scp (probably with the -avz options?) if you want to copy directly. (What exactly are these typechange errors?)
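For example, run from newserver (a sketch based on your scp command; -a preserves permissions and symlinks, -v is verbose, -z compresses in transit):
rsync -avz oldserver:~/project/ ~/project/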
I've actually done this, and all I did was tar the repo up first and scp it over. I would think that scp -rp would work as well.
"Typechange" would normally refer to things like a symlink becoming a file or vice-versa. Are the two servers running the same OS?
You may also want to try the simple dumb solution -- don't worry about how the typechanges got there, but let git fix them with a reset command:
git reset --hard HEAD
That only makes sense if (1) the problems all pertain to the checked-out files (and not the repository structure itself) and (2) you haven't made any changes on newserver which you need to preserve.
Given those caveats, it worked for me when I found myself with the same problem, and it doesn't require you to think about git's internals or how well your file-transfer process is preserving attributes.