Can git use patch/diff based storage? - linux

As I understand it, Git stores a full copy of each file for each revision committed. Even though it's compressed, there's no way that can compete with, say, storing compressed patches against one full original revision. It's especially an issue with poorly compressible binary files like images, etc.
Is there a way to make git use a patch/diff based backend for storing revisions?
I get why Git's main use case works the way it does, but I have a particular use case where I would like to use Git if I could; as it stands, it would take up too much space.
Thanks

Git does use diff based storage, silently and automatically, under the name "delta compression". It applies only to files that are "packed", and packs don't happen after every operation.
git-repack docs:
A pack is a collection of objects, individually compressed, with delta compression applied, stored in a single file, with an associated index file.
Git Internals - Packfiles:
You have two nearly identical 22K objects on your disk. Wouldn’t it be nice if Git could store one of them in full but then the second object only as the delta between it and the first?
It turns out that it can. The initial format in which Git saves objects on disk is called a “loose” object format. However, occasionally Git packs up several of these objects into a single binary file called a “packfile” in order to save space and be more efficient. Git does this if you have too many loose objects around, if you run the git gc command manually, or if you push to a remote server.
Later:
The really nice thing about this is that it can be repacked at any time. Git will occasionally repack your database automatically, always trying to save more space, but you can also manually repack at any time by running git gc by hand.
"The woes of git gc --aggressive" (Dan Farina), which describes that delta compression is a byproduct of object storage and not revision history:
Git does not use your standard per-file/per-commit forward and/or backward delta chains to derive files. Instead, it is legal to use any other stored version to derive another version. Contrast this to most version control systems where the only option is simply to compute the delta against the last version. The latter approach is so common probably because of a systematic tendency to couple the deltas to the revision history. In Git the development history is not in any way tied to these deltas (which are arranged to minimize space usage) and the history is instead imposed at a higher level of abstraction.
Later, quoting Linus, about the tendency of git gc --aggressive to throw out old good deltas and replace them with worse ones:
So the equivalent of "git gc --aggressive" - but done properly - is to
do (overnight) something like
git repack -a -d --depth=250 --window=250
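If you want to see this happening in your own repository, a quick check (plain Git commands, nothing project-specific assumed) looks something like this:

git count-objects -v        # "count" = loose objects, "in-pack" = objects already in packfiles
git gc                      # pack the loose objects (Git also does this automatically from time to time)
git count-objects -v        # the loose count drops, the in-pack count rises
git verify-pack -v .git/objects/pack/pack-*.idx | head    # per object: size in the pack and delta chain depth

The verify-pack output shows which objects are stored whole and which are stored as deltas against another object, which is the delta compression described above.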

Related

Managing a large quantity of files between two systems

We have a large repository of files that we want to keep in sync between one central location and multiple remote locations. Currently, this is being done using rsync, but it's a slow process mainly because of how long it takes to determine the changes.
My current thought is to find a VCS-like solution where, instead of having to check all of the files, we can check the diffs between revisions to determine what gets sent over the wire. My biggest concern, however, is that we'd have to re-sync all of the files that are currently in sync, which is a significant effort. I've been told that the current repository is about 0.5 TB and consists of a variety of files of different sizes. I understand that an initial commit will most likely take a significant amount of time, but I'd rather avoid a full re-sync between clusters if possible.
One thing I did look at briefly is git-annex, but my first concern is that it may not like dealing with thousands of files. Also, one thing I didn't see is what would happen if the file already exists on both systems. If I create a repo using git-annex on the central system and then set up repos on the remote clusters, will pushing from central to a remote repo cause it to sync all of the files?
If anyone has alternative solutions/ideas, I'd love to see them.
Thanks.
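For what it's worth, a minimal git-annex setup of the kind described might look roughly like the following (the paths and remote name are made up); whether content that already exists on both sides gets re-transferred is exactly the sort of thing worth testing on a small subset before committing 0.5 TB to it:

# on the central system
cd /srv/repo
git init
git annex init "central"
git annex add .                      # large files go into the annex, symlinks get committed
git commit -m "initial import"

# connect a remote cluster and sync
git remote add cluster1 ssh://cluster1/srv/repo
git annex sync --content cluster1    # pushes the git branches and copies annexed content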

Should I use git-lfs for package info files?

As a developer working with several languages, I notice that in most modern languages, dependency metadata files can change a lot.
For instance, in NodeJS (which in my opinion is the worst when it comes to package management), a change in dependencies or in the NPM (respectively Yarn) version can lead to huge changes in package-lock.json (respectively yarn.lock), sometimes with tens of thousands of modified lines.
In Golang, for instance, this would be go.sum, which can see significant changes (of smaller magnitude than Node, of course) when modifying dependencies or occasionally when running go mod tidy.
Would it be more efficient to track these dependency files with git-lfs? Is there a reason not to do it?
Even if they are text files, I know that it is advised to push SVG files with git-lfs, because they are mostly generated files and their diff has no reason to be small when regenerating them after a change.
Are there studies about what language and what size/age of a project that makes git-lfs become profitable?
Would it be more efficient to track these dependency files with git-lfs?
Git does a pretty good job of compressing text files, so initially you probably wouldn't see much gain. If the file is heavily modified often, then over time the total clonable repo size would grow more slowly with Git LFS, but the savings may end up negligible as a percentage of the total repo size. The primary use case for LFS is largish binary files that change often.
Is there a reason not to do it?
If you aren't already using Git LFS, I wouldn't recommend starting for this reason. Also, AFAIK there isn't native built-in support for diffing versions of files stored in LFS, though workarounds exist. If you often find yourself diffing the files you are considering moving into LFS, the nominal storage gain may not be worth it.
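For completeness, if you did decide to try it, moving such a file into LFS is just a tracking rule (the file pattern below is only an example):

git lfs install                      # once per machine
git lfs track "package-lock.json"    # writes a pattern to .gitattributes
git add .gitattributes package-lock.json
git commit -m "track package-lock.json with LFS"

Note that this only affects new commits; versions already in history stay in ordinary Git storage unless you rewrite it with git lfs migrate.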

Is it possible to add the SHA of my current commit to the core file pattern?

I'm looking to add the git sha to the core file pattern so I know exactly which commit was used to generate the core file.
Is there a way to do this?
It's not clear to me what you mean by "the core file pattern". (In particular, when a process crashes and the Linux kernel generates a core dump, it uses kernel.core_pattern. This setting is system-wide, not per-process. There is a way to run an auxiliary program—see How to change core pattern only for a particular application?—but that only gets you so far; you still have to write that program. See also https://wiki.ubuntu.com/Apport.) But there is a general problem here, which has some hacky solutions, all of which are variants on a pretty obvious method that is still a little bit clever.
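For reference, that kernel setting can be inspected and changed along these lines (the handler path is hypothetical; writing that helper program is the part left to you):

cat /proc/sys/kernel/core_pattern                        # current, system-wide pattern
sudo sysctl -w kernel.core_pattern='core.%e.%p.%t'       # %e = executable name, %p = pid, %t = timestamp
sudo sysctl -w kernel.core_pattern='|/usr/local/bin/core-handler %e %p'   # pipe the dump to a helper program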
The general problem
The hash of the commit you are about to make is not known until after you have made it. Worse, even if you can compute the hash of the commit you are about to make (which you can, it's just difficult), changing the content of some committed file that will go into the commit, so as to include this hash, changes the content of the commit you actually make, which means you get a different commit hash.
In short, it is impossible to commit the commit hash of the commit inside the commit.
The hacky solution
The general idea is to write an untracked file that you use in your build process, so that the binary contains the commit hash somewhere easily found. For projects built with Make, see how to include git commit-number into a c++ executable? for some methods.
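A minimal sketch of that approach, assuming a C-style build where an untracked generated header is acceptable (file names are arbitrary, and version.h should be in .gitignore):

# run as a pre-build step
echo "#define GIT_SHA \"$(git rev-parse --short HEAD)\"" > version.h
# or, if you want the -dirty marker mentioned further down:
echo "#define GIT_SHA \"$(git describe --always --dirty)\"" > version.h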
The same kind of approach can be used when building tarballs. Git has the ability to embed the hash ID of a file (blob object) inside a work-tree file, using the ident filter, but this is the ID of the file, which is usually not useful. So, instead, if you use git archive to produce tar or zip files, you can use export-subst, as described in the gitattributes documentation and referred to in the git archive documentation. Note that the tar or zip archive also holds the commit hash ID directly.
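A sketch of the export-subst route (the file name is arbitrary; the $Format:...$ placeholder is expanded only by git archive, not on checkout):

echo 'version.txt export-subst' >> .gitattributes
echo 'commit: $Format:%H$' > version.txt
git add .gitattributes version.txt
git commit -m "embed commit hash in archives"
git archive -o release.tar HEAD      # version.txt inside the tar now contains the real hash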
Last, you can write your own custom smudge filter that embeds a commit hash ID into a work-tree file. This might be useful in languages where there is no equivalent of an external make process run to produce the binary. The problem here is that when the smudge filter reads HEAD, it's set to the value before the git checkout finishes, rather than the value after it finishes. This makes it much too difficult to extract the correct commit hash ID (if there is even a correct one—note that git describe will append -dirty if directed, to indicate that the work-tree does not match the HEAD commit, when appropriate).
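For illustration only, such a filter might be wired up like this (the filter name and the @COMMIT@ placeholder are made up, and the HEAD-timing caveat above still applies):

echo 'version.txt filter=embedsha' >> .gitattributes
git config filter.embedsha.smudge 'sed "s/@COMMIT@/$(git rev-parse HEAD)/"'
git config filter.embedsha.clean 'sed -E "s/[0-9a-f]{40}/@COMMIT@/"'    # fragile: assumes the hash is the only 40-hex-digit string in the file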

Perforce: How do files get stored with branching?

A very basic question about branching and duplicating resources. I have had discussions like this due to the size of our main branch, but that aside, it is great to know how this really works.
Consider the problem of branching dozens of Gb.
What happens when you create a branch of this massive amount of information?
I am reading the official doc here and here, but am still confused about how the files are stored for each branch on the server.
Say a file A.txt exists in the main branch.
When creating the branch (Xbranch), and considering A.txt won't have changes, will the Perforce server duplicate A.txt (one copy keeping the main branch's changes and another for Xbranch)?
For a massive amount of data this matters, because it would mean duplicating dozens of GB. So how does this really work?
Some notes in addition to Bryan Pendleton's answer (and the questions from it)
To really check your understanding of what is going on, it is good to try with a test repository containing a small number of files, create checkpoints after each major action, and then compare the checkpoints to see which database rows were actually written (as well as having a look at the archive files that the server maintains). This is very quick and easy to set up. You will notice that every branched file generates records in db.integed, db.rev, db.revcx and db.revhx - let alone any in db.have.
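As a concrete illustration of that workflow (the server root path is made up; checkpoint numbers increment automatically, and on a live server you can use p4 admin checkpoint instead):

p4d -r /tmp/p4test -jc        # writes checkpoint.1 with the server metadata
# ... branch some files, submit ...
p4d -r /tmp/p4test -jc        # writes checkpoint.2
diff /tmp/p4test/checkpoint.1 /tmp/p4test/checkpoint.2    # the new db.integed/db.rev/db.revcx/db.revhx rows show up here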
You also need to be aware of which server version you are using as the behavior has been enhanced over time. Check the output of "p4 help obliterate":
Obliterate is aware of lazy copies made when 'p4 integrate' creates
a branch, and does not remove copies that are still in use. Because
of this, obliterating files does not guarantee that the corresponding
files in the archive will be removed.
Some other points:
The default flags for "p4 integrate" to create branches copied the files down to the client workspace and then copied them back to the server with the submit. This took time depending on how many and how big the files were. It has long been possible to avoid this using the -v (virtual) flag, which just creates the appropriate rows on the server and avoids updating the client workspace - usually hugely faster. The possible slight downside is you have to sync the files afterwards to work on them.
Newer releases of Perforce have the "p4 populate" command which does the same as an "integrate -v" but also does not actually require the target files to be mapped into the current client workspace - this avoids the dreaded "no target file(s) in client view" error which many beginners have struggled with! [In P4V this is the "Branch files..." command on the right-click menu, rather than "Merge/Integrate..."] A command-line sketch of both approaches follows after these notes.
Streams have made branching a lot slicker and easier in many ways - well worth reading up on and playing with (the only potential fly in the ointment is the flat two-level naming hierarchy, and also potential challenges in migrating existing branches with existing relationships into streams).
Task streams are pretty nifty and save lots of space on the server.
Obliterate has had an interesting -b flag for a few releases, which makes it possible to quickly and easily remove unchanged branch files - effectively retro-creating a task stream. This can potentially save millions of database rows in larger installations with lots of branching.
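A rough command-line sketch of the two branching approaches mentioned above (depot paths and descriptions are only examples):

# classic way: virtual integrate, then submit - no file content is copied to the workspace
p4 integrate -v //depot/main/... //depot/Xbranch/...
p4 submit -d "Branch main to Xbranch"

# newer way: populate does it in one server-side step, and the target
# does not need to be mapped in the client view
p4 populate -d "Branch main to Xbranch" //depot/main/... //depot/Xbranch/...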
In general, branching a file does not create a copy of the file's contents; instead, the Perforce server just writes an additional database record describing the new revision, but shares the single copy of the file's contents.
Perforce refers to these as "lazy copies"; you can learn more about them here: http://answers.perforce.com/articles/KB_Article/How-to-Identify-a-Lazy-Copy-of-a-File
One exception is if you use the "+S" filetype modifier, as in this case each branch will have its own copy of the content, so that the +S semantics can be performed properly on each branch independently.

Number of threads for git gc depending on repo size

Can I use single-threaded compression in Git for large repositories and the usual parallelized one for small ones? Something like "pack.threads=1" if the repository doesn't easily fit in memory and "pack.threads=4" otherwise.
As I heard somewhere, a multithreaded "git gc" requires a lot of memory and thrashes (or just fails) for longer than the single-threaded one.
I want it to work fast for small repos and not fail on big repos.
You can configure pack.threads per repository, but I doubt that there is a setting to do this automatically depending on the size of the repository.
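Setting it per repository is straightforward (the values are examples; pack.threads=0 means auto-detect the CPU count):

# inside the big repository only
git config pack.threads 1
git config pack.windowMemory 256m     # optionally cap the memory each pack thread may use

# global default for everything else
git config --global pack.threads 0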

Resources