I have a large list of active files in my Git repository. One change I made was deleting a large number of images across several directories. I want to commit that change immediately, but I don't want to include all active files and I don't want to manually type out git rm myfile.png for every single image.
So essentially what I want to do is run git rm on all active files ending in .png. I'm trying to accomplish this by piping the results of git status into git rm but I'm having trouble isolating the file name and getting this to work as I'd like.
Is this a proper use of piping and if so what syntax do I need?
Any help is appreciated, thanks.
If you already removed the files, you can type:
git add -u
and the deletions will be staged in the index, ready to commit.
From git help add:
-u, --update
Only match <filepattern> against already tracked files in the index
rather than the working tree. That means that it will never stage
new files, but that it will stage modified new contents of tracked
files and that it will remove files from the index if the
corresponding files in the working tree have been removed.
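For completeness, the piping approach from the question can also be made to work. A sketch (assuming GNU xargs for the -r flag, which skips running git rm when there is nothing to remove):

```shell
# List worktree-deleted files (D), restrict to .png via a git pathspec,
# NUL-delimit so unusual file names survive, and feed them to git rm.
git diff --name-only --diff-filter=D -z -- '*.png' | xargs -0 -r git rm --
```

The pathspec '*.png' is matched by git itself, so it catches deleted images in subdirectories too.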
I'm writing a Node script that creates a git stash dynamically.
The goal is to create a single stash entry containing some files, while leaving all other changes untouched in the working tree.
The files to be stashed are stored in a JS array. Beware: it can contain hundreds of files.
I can't call git stash save -- <file1> for each file, as that would create N stash entries, and git stash save -- <file1> <file2> <file3> etc. may be too long for a single command, given the number of files to handle.
How can I write a script that runs multiple git commands to build up a single stash entry step by step?
... Or is there another solution I didn't think of?
Thank you!
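If your Git is 2.26 or newer, you can avoid scripting multiple commands entirely: git stash push accepts a file listing the pathspecs, so the whole array becomes one stash entry. A sketch (the file names are placeholders):

```shell
# Write the array of paths to a temp file, one path per line,
# then create a single stash entry covering exactly those paths.
files=("src/a.txt" "src/b.txt")
printf '%s\n' "${files[@]}" > /tmp/stash-paths.txt
git stash push --pathspec-from-file=/tmp/stash-paths.txt -m "partial stash"
```

From Node you would write the list with fs.writeFileSync and spawn git once; no per-file invocation is needed.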
I had an awful list of old stashes
I first removed the very old ones:
git reflog expire --expire-unreachable=7.days refs/stash
I have one huge stash left, which contains many stashed changes. Some I want to keep; others would damage my production system. I went through the diff
git diff stash@{0}^1 stash@{0}
and I know which ones are to keep
I could do
git checkout --patch stash@{0} -- myfilename
to unstash the changes on myfilename, and it works fine.
However, I have a large folder with many files with stashed changes inside. I would like to apply all of them but only within that subfolder.
I have tried to approach it using wildcards in ksh, but it does not work:
git checkout --patch stash@{0} -- myfolder/*
results in
error pathspec [...] did not match any files known to git
The solution does not need to be git-based; it can be a shell script wrapping git calls.
Have you tried:
git checkout --patch stash@{0} -- myfolder
without the trailing *?
Chances are your shell expands myfolder/* before executing the git command, so git only sees the elements that currently exist on disk, which is probably not what you want.
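Alternatively, you can keep a wildcard but quote it so git (not ksh) performs the matching, since git pathspecs are matched against the paths recorded in the stash. And if you want every stashed change under the folder without interactive patching, drop --patch. A sketch, with myfolder as the placeholder path:

```shell
# Quote both arguments so the shell passes them through to git untouched
git checkout --patch 'stash@{0}' -- 'myfolder/*'
# Or restore all stashed changes under the folder non-interactively:
git checkout 'stash@{0}' -- myfolder
```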
I wanted to "clean" my git repo before pushing by removing every PNG file, so I entered:
find . | xargs rm *.png
in the git root, and now everything is deleted. My *.py files are gone too, and I do not know why. It is a Linux (Ubuntu) machine. Is there any chance to recover my files, maybe via the OS?
The command you typed is plain wrong:
find .
This command outputs the name of every file and directory below ., including hidden files.
xargs
This command reads its standard input and appends each item as an argument to the command it is given (in batches, not one at a time). So it ran rm *.png <file1_from_find> <file2_from_find> ..., where *.png had already been expanded by the shell; every file name that find printed became an argument to rm.
There is no safeguard such as stopping on errors, so if you let the command run to completion, it unlinked every file and you now have an empty directory tree. And git will not help you, because it stores its metadata and current state in a .git directory at the root of the working directory, which you just emptied as well.
So no: unless you made a copy, either manually or by pushing your state somewhere else, it is probably gone, but see below. For future reference, here is the correct command to delete all files ending in .png:
find . -name '*.png' -delete
Optionally, add -type f before the -delete in case you have directories whose names end in .png, to filter them out.
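Before running any destructive find, it is worth doing a dry run: the same expression without -delete just prints the matches.

```shell
find . -type f -name '*.png'          # dry run: list what would be deleted
find . -type f -name '*.png' -delete  # then actually delete
```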
Okay, what now? It happens that git marks some of its internal state read-only, which rm honors unless you used rm -f, so you might be able to recover some data. Go to the .git directory at your working directory's root. It contains an objects directory, and some files may have survived there.
Those files are raw zlib-compressed streams; you can view their contents with:
zlib-flate -uncompress <path_to_the_file
(the zlib-flate command comes from the qpdf package on Ubuntu)
For me, the following worked:
$ git checkout -- name_of_deleted_directory
or
$ git checkout -- path_to_local_deleted_directory
In my case, I had deleted a directory by mistake; I hadn't changed any code in that directory, but I had changed code in other directories of my repo.
I was on version 100, with local changes.
I did an svn up to reach HEAD (which is revision 200). Then I was ill-advised to revert back to revision 150, keeping my local changes, with the command: svn merge -r HEAD:150 .
Now I actually want to go back to revision 200 with my local changes. svn up doesn't do anything, and I appear to have files missing: I know because a file A.cpp exists in revision 200 but not in my local working copy.
If I do svn status, I see a bizarre "D" in front of A.cpp; Subversion seems to think I want to delete a file I never even touched.
What state am I in now, and how do I fix it?
In brief, your current checked-out workspace is messed up: it has a combination of your changes plus a set of uncommitted changes that undo everything from HEAD back to r150. If you committed at this point, the effect would be to remove all the changes made between r150 and HEAD and then add in yours.
If a re-merge, svn merge -r 150:HEAD ., doesn't work (and generally it won't), I would suggest the following:
Assuming your current workspace is <currws>:
checkout a second copy of the workspace, at revision 150: svn co -r 150 <svn url> <newws>. This will give you a directory <newws>
(cd <currws>; tar cf - --exclude .svn .) | (cd <newws>; tar xf -). This will take all the files & directories from <currws> and copy them into <newws>.
Take inventory of the new directory: it should now contain copies of only your changes. Some of these may need to be svn added to the workspace; if you have deletes, they will need to be re-deleted in <newws>. You can pre-remove all the files/folders from <newws> prior to the tar; then anything that appears with a ! afterwards is a file that your changes removed, anything with a ? is a file that needs adding, and the rest should be M entries.
Bring the new workspace up to HEAD: svn up <newws> should work in this case.
Verify that everything works and that it contains only your changes.
Make a patch file, get it code-reviewed, and then commit it.
I'm pretty sure this will get you back on track, although I don't have a tree to check against given the spotty network connectivity I have.
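The tar pipe in the copy step is the subtle part: it copies everything, hidden files included, from the old workspace into the new one, while leaving each workspace's .svn metadata alone. A sketch with currws and newws as placeholder directories:

```shell
# Stream the whole tree (dotfiles included) out of currws and unpack it
# into newws, excluding every .svn directory along the way.
(cd currws && tar cf - --exclude .svn .) | (cd newws && tar xf -)
```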
I am experimenting with some Linux configuration and I want to track my changes. Of course, I don't want to put my whole OS under version control!
Is there a way (with git, mercurial, or any VCS) to track the changes without storing the whole OS?
This is what I imagine:
I run a kind of git init -> all hashes of all files are stored, but not the contents of the files
I make some changes to my file system -> git detects that a file's hash has changed
I commit -> the content of the file is stored (or, even better, the original file and the diff are stored! I know, that is impossible...)
Possible? Impossible? Work-arounds?
EDIT: What I care about is just minimizing the size of the repository, so that it contains only my changes. Having all files in the repository is not relevant for me. For example, if I push to GitHub, I want it to contain only the files that have changed.
Take a look at etckeeper, it will probably do the job.
What you want is git update-index --info-only or ... --index-info. From the man page: "--info-only is used to register files without placing them in the object database. This is useful for status-only repositories." --index-info is its industrial-scale cousin.
Do that with the files you want to track, then write-tree to write the index structure into the object db, commit-tree to commit that, and update-ref to update a branch.
To get an object name, use git hash-object filename.
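Put together, the plumbing sequence looks roughly like this. A sketch: run it inside an initialized repo, with myfile.conf as a placeholder; note that write-tree needs --missing-ok here, because the blobs were deliberately never stored.

```shell
# Register the file's hash in the index without writing a blob object
git update-index --add --info-only myfile.conf
# Turn the index into a tree, tolerating the missing blob objects
tree=$(git write-tree --missing-ok)
# Commit that tree and point a branch at it
commit=$(git commit-tree -m "status-only snapshot" "$tree")
git update-ref refs/heads/status-only "$commit"
```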
Here is what we do...
su -
cd /etc
echo "*.cache" > .gitignore
git init
chmod 700 .git
cd /etc; git add . && git add -u && git commit -m "Daily Commit"
Then set up the crontab:
su -
crontab -e
# Put the following in:
0 3 * * * cd /etc; git add . && git add -u && git commit -m "Daily Commit"
Now you will have a nightly commit of all changes in /etc.
If you want to track more than /etc in one repo, you could simply do it at the root of your filesystem, adding the proper ignore paths to your /.gitignore. I am unclear on the effects of having git within git, so you might want to be extra careful in that case.
I know this question is old, but I thought this might help someone. Inspired by @Jonathon's comment on the "How to record concrete modification of specific files" question, I have created a shell script that lets you monitor all the changes made to a specific file, while keeping the full change history. The script depends on the inotifywait and git packages being installed.
You can find the script here
https://github.com/hisham-hassan/linux-file-monitor
Usage: file-monitor.sh [-f|--file] <absolute-file-path> [-m|--monitor|-h|--history]
file-monitor.sh --help
-f,--file <absolute-file-path> Adds a file to the monitored-files list. The <absolute-file-path>
is the absolute path of the file we need to act on.
PLEASE NOTE: a relative file path could cause issues in the script,
so please make sure to use the absolute path of the file. Also try to
avoid symlinks, as they have not been tested.
example: file-monitor.sh -f /absolute/path/to/file/test.txt -m
-m, --monitor Monitors all the changes to the file. The monitoring keeps
happening as long as the script is running; you may need to run it
in the background.
example: file-monitor.sh -f /absolute/path/to/file/test.txt -m
-h, --history Shows the full history of the file.
To exit, press "q"
example: file-monitor.sh -f /absolute/path/to/file/test.txt -h
--uninstall Uninstalls the script from the bin directory
and removes the monitoring history.
--install Adds the script to the bin directory, and creates
the directories and files needed for monitoring.
--help Prints this help message.