I've read other threads with similar titles, but they don't seem related to my problem. I have a Git repo on my local Ubuntu box. The files in it are MySQL dumps that I split into 2 MB chunks. When I run git gc, this is the output:
git#pc:~/repos/x$ git gc
Counting objects: 17244, done.
Delta compression using up to 4 threads.
Killedssing objects: 90% (2491/2767)
error: failed to run repack
Is there a way to trace what is causing this? I have already tried:
git config --global pack.windowMemory "20m"
git config --global pack.packSizeLimit "20m"
git config --global pack.threads "1"
But this is the output:
git#pc:~/repos/x$ git gc
Counting objects: 17244, done.
Killedssing objects: 3% (88/2767)
error: failed to run repack
Edit
This worked for me:
git config --global pack.windowMemory "20m"
git config --global pack.packSizeLimit "20m"
git config --global pack.threads "4"
git config --global pack.window "2"
git config --global pack.depth "10"
It seems to be because my VPS has only 256 MB of RAM, and other processes were already eating 100 MB of it. I will also try to upgrade my VPS, because this configuration makes git gc and git clone very slow.
Edit Again
After more investigation, it turned out to be the threads. My VPS has 4 virtual cores assigned, and when I set pack.threads to "2" without the other settings, the problem went away.
I asked my provider for more RAM, but that did not help; only configuring the threads did. Hope this helps others.
From your output, it looks like the process is being killed by something external, such as the Linux OOM killer or some other resource-management process, during the delta compression stage. If you're running out of memory or disk space, that gives you a logical place to start your investigation.
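If the OOM killer is the culprit, it usually leaves a trace in the kernel log, so a quick check is worthwhile (the exact wording of the message varies between kernel versions):
dmesg | grep -iE 'out of memory|killed process'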
You might also consider running your process under strace. This will often show what the program was doing when it received the signal, but it may not tell you who the sender of the signal was.
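For example, a minimal strace invocation might look like this (the trace file name is arbitrary); if the process was killed, the tail of the trace will end with a line like +++ killed by SIGKILL +++:
strace -f -o git-gc.trace git gc
tail git-gc.trace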
If strace fails you, the Git source currently has 12 references to sigaction(2), which you might be able to leverage to determine the signal sender by examining the siginfo_t struct. This would most likely require a core dump or an interactive debugger such as gdb.
Related
I am trying to install Homebrew on a Linux cluster, but I am constantly getting the following error messages:
fatal: packfile .git/objects/pack/pack-9d2d97f367d3ebfa65a3b708b2d87333a8eb2bf0.pack cannot be mapped: Cannot allocate memory
error: https://github.com/Homebrew/homebrew-core did not send all necessary objects
Failed during: git fetch --force origin refs/heads/master:refs/remotes/origin/master
I then created a .git repository by running git init in the .linuxbrew folder.
Can you please tell me how I can solve this problem?
Thank you.
From the message, it looks like Git is trying to memory-map the entire pack file and running out of memory. You might want to add more virtual memory (swap) to your system. It will be slower, but there is a chance the operation will be able to proceed.
https://askubuntu.com/questions/33697/how-do-i-add-swap-after-system-installation
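If adding swap is not an option, another knob worth trying, as an assumption rather than a confirmed fix for this case, is limiting how much pack data Git maps into memory at once. core.packedGitWindowSize and core.packedGitLimit are standard Git configuration options; the values below are guesses you may need to tune:
git config --global core.packedGitWindowSize 32m
git config --global core.packedGitLimit 128m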
I recently updated the development server that hosts our code repos to a newer version of Ubuntu (18.04). As part of the process, Git was upgraded to version 2.23.0. The application servers where the code gets deployed need to be able to check out the latest changes from the Git repos. When I try to do a git fetch on those servers, I get a long list of errors that look like this:
error: index file ./objects/pack/._pack-5b58f700fea57ee6f8ff29514a376b945bb1c8a9.idx is too small
I did some digging around to see if I could come up with a solution, but so far nothing has worked. I tried the answers listed here: git error: "index file is too small".
Neither git index-pack nor git repack -a -d solved the issue. I even tried deleting the local copy of the files from the application server and doing a fresh git clone. The clone itself threw a bunch of errors similar to the ones before:
remote: error: index file ./objects/pack/._pack-5b58f700fea57ee6f8ff29514a376b945bb1c8a9.idx is too small
At this point I'm out of ideas. Any help would be appreciated.
Edit: The output of df -h suggests that there is enough disk space.
The error message sounds like file corruption. If you have not run out of disk space, you can delete the corrupt index file and regenerate it from the corresponding pack file (note that git index-pack takes the .pack file, not the .idx, as its argument):
git index-pack -v ./objects/pack/pack-5b58f700fea57ee6f8ff29514a376b945bb1c8a9.pack
You might also want to run git fsck to verify the connectivity and validity of the objects in the Git database -- both the remote and the local one.
If your index is corrupt, you can also try resetting the branch, which will create a new index file (see the commands after this list):
To be safe, back up .git/index.
Remove the index file .git/index.
Run git reset.
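In commands, that procedure looks like this (run from the root of the working tree):
cp .git/index .git/index.backup
rm .git/index
git reset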
References
The issue is a possible duplicate of git error: "index file is too small"
Documentation on git index-pack can be found at https://git-scm.com/docs/git-index-pack
Some notes on repairing a broken index: https://makandracards.com/makandra/5899-how-to-fix-a-corrupt-git-index
fatal: packfile name 'server' does not end with '.pack'
I encountered this error when transferring my Git repo from macOS to another system. Files starting with '._' are macOS metadata files generated by the tar command. To avoid creating '._*' files, see this question: Tar command in mac os x adding "hidden" files, why?
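As a sketch of the workaround: COPYFILE_DISABLE is the standard environment variable that tells Apple's tar not to emit the AppleDouble ._* entries (the archive and directory names below are just examples), and find can remove ._* files that already made it into a repo:
COPYFILE_DISABLE=1 tar -czf repo.tar.gz repo/    # archive without ._* metadata files
find . -name '._*' -delete                       # run inside the repo to clean up existing ones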
I've been installing packages on my VM (especially Python 3.6 development packages), and it seems I corrupted some setup, so now I get the following errors:
git pull --rebase
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove .git/gc.log.
Automatic cleanup will not be performed until the file is removed.
error: Could not stat '.git/objects/4f/6716241438e21094af08213c05290a34cffdd7'
error: Could not stat '.git/objects/4f/abf345fc90d14f6f0026cf91bcc4c2fd5c58b8'
and a lot more like them.
Could you suggest how to fix this?
"git gc" solved the problem (see git docs)
I ran git pull on my Ubuntu VM, and I got
Enter passphrase for key '/root/.ssh/id_rsa':
remote: Counting objects: 7, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 7 (delta 4), reused 0 (delta 0)
Unpacking objects: 100% (7/7), done.
From bitbucket.org:bheng/app
7406b4e..8f5e3dc master -> origin/master
Auto packing the repository for optimum performance. You may also
run "git gc" manually. See "git help gc" for more information.
Counting objects: 5384, done.
fatal: Out of memory, malloc failed (tried to allocate 306851376 bytes)
error: failed to run repack
Updating 7406b4e..8f5e3dc
Fast-forward
Auto packing the repository for optimum performance. You may also
run "git gc" manually. See "git help gc" for more information.
Counting objects: 5384, done.
fatal: Out of memory, malloc failed (tried to allocate 306851376 bytes)
error: failed to run repack
resources/views/layouts/fe/meta.blade.php | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
As you can see at the end, it pulled in the changes fine.
But I just want to get rid of those errors. How do I get rid of them? By clearing some cache?
I saw that git gc is the recommended command for cleaning up the local repository, but I can't even run that:
Counting objects: 7709, done.
fatal: Out of memory, malloc failed (tried to allocate 306851376 bytes)
error: failed to run repack
I also tried
git fsck
Checking object directories: 100% (256/256), done.
Checking objects: 100% (2377/2377), done.
dangling commit 32c8482dcc118bd64c1f8e45ee093c286a840e5d
dangling commit abce13f3bfb8c3833c7b76f4aea9cd28caa3af03
dangling blob c9131033c721c0b90c86bf7b32377588354363ec
dangling commit de6d83d883d35f97cb211233e2283c9d79e6f922
dangling blob d535c98d611cca74af45633c77680bb2bdc5e55a
Finally, running git pull one more time got this:
Auto packing the repository for optimum performance. You may also
run "git gc" manually. See "git help gc" for more information.
Counting objects: 5384, done.
fatal: Out of memory, malloc failed (tried to allocate 306851376 bytes)
error: failed to run repack
Already up-to-date.
Well, your system is simply out of RAM. Git regularly runs git gc when needed, and that is where the error message comes from, as you can also see when running it manually. It tries to repack your Git repository and fails when trying to allocate an additional 293 MiB of RAM. Put more RAM into your box, or at least enlarge the swap space, even if only temporarily, so the repack can finish successfully. You can easily add a swap file while the system is running if you only need it temporarily. Documentation can be found at https://help.ubuntu.com/community/SwapFaq#Four-step_Process_to_Add_Swap_File (skip the last step if the swap is only temporary).
Your RAM is probably too small, so you have to add swap space.
I did this on Ubuntu 16.04 and it solved the issue.
The command below shows whether any swap is configured:
free -h
If swap is zero, check that there is enough free disk space to hold a swap file:
df -h
My RAM is 1 GB, so I allocated 1 GB to swap as well:
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo swapon --show
If this displays something like the following, it succeeded:
NAME TYPE SIZE USED PRIO
/swapfile file 1024M 0B -1
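A swap file enabled this way lasts only until the next reboot. To make it permanent, the usual Ubuntu approach is to add a line for it to /etc/fstab:
/swapfile none swap sw 0 0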
Thanks
Try following the instructions here: https://stackoverflow.com/a/8761464/1278288
git config --global pack.windowMemory "100m"
git config --global pack.packSizeLimit "100m"
git config --global pack.threads "1"
It should reduce the memory needed.
It's a good idea to have at least 1 GB of swap, though.
I'm encountering an issue with Git where I'm receiving the following message:
> git fetch
error: cannot create pipe for ssh: Too many open files
fatal: unable to fork
The System Administrators have increased my file limit, but it has not corrected the issue. Additionally, I don't have an issue with creating new files with vi.
When trying to push a new branch, I get a similar message:
git push origin test_this_broken_git
error: cannot create pipe: Too many open files
fatal: send-pack: unable to fork off sideband demultiplexer
Could somebody answer exactly why this is happening? I have not made any recent changes to my git config and have verified that manually.
There are two similar error messages:
EMFILE: Too many open files
ENFILE: Too many open files in system
It looks like you're getting EMFILE, which means that the limit on open files for a single process is being exceeded. So checking whether vi can open files is irrelevant; vi uses its own, separate file table. Check your limits with:
$ ulimit -n
1024
So on my system there is a limit of 1024 open files for a single process. You shouldn't need to ask your system administrator (please don't use the acronym SA; it's too opaque, and if you must abbreviate, use "sysadmin") to raise the limit.
You may wish to check which files Git opens by running Git under strace.
This could be a bug in Git or in a library, or you could be using an old version of something, or it could be something more bizarre. Try strace first to see which files Git opens, and check whether it closes those files.
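A minimal sketch of that check (the trace file name is arbitrary): limit the trace to file-descriptor syscalls, then inspect git-fetch.trace for descriptors that are opened or created but never closed:
strace -f -e trace=open,openat,pipe,pipe2,close -o git-fetch.trace git fetch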
Update from Hazok:
After using the above recommendations, it turns out the error was caused by too many loose objects. There were too many loose objects because git gc wasn't being run often enough.
Why did this happen?
From the git documentation:
When there are approximately more than this many loose objects in the repository, git gc --auto will pack them. Some Porcelain commands use this command to perform a light-weight garbage collection from time to time. The default value is 6700.
Here "Some Porcelain commands" includes git push, git fetch etc. So if the max open files limit ulimit -n < 6700, you'll be eventually blocked by git gc --auto once you got ~6700 loose objects in a single git repo.
I'm in a hurry. How do I fix it?
If you have sufficient permissions to raise the limit (note that ulimit is a shell builtin, so sudo ulimit will not work; run it in your own shell, and raising it past the hard limit requires root, e.g. via /etc/security/limits.conf):
$ ulimit -n 8192
Otherwise, you may disable git gc by setting git config gc.auto 0, so that you can push your local commits to the remote, delete the repo, and clone it back without thousands of loose objects.
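As a sketch of that workaround (the remote name, branch, repo directory, and repository URL below are placeholders):
git config gc.auto 0          # disable auto gc in this repo only
git push origin master        # push while gc cannot interrupt
cd .. && rm -rf myrepo        # myrepo is a placeholder name
git clone git@example.com:me/myrepo.git    # placeholder URL -- use your own remote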
How can we prevent this from happening again?
Set git config --global gc.auto 200, where 200 is some value lower than your max-open-files limit. If you pick too small a value, git gc will run too frequently, so choose wisely.
If you set gc.auto=0, the loose objects will never be packed unless you run git gc manually, so hundreds of thousands of files can accumulate in the same directory. That might be a problem, especially for mechanical hard drives or for Windows users. (See also: How many files in a directory is too many? and Is it OK (performance-wise) to have hundreds or thousands of files in the same Linux directory?)