Zephir build throws "Allowed memory size exhausted" on newly cloned Phalcon repo - memory-leaks

General
software: macOS
Phalcon: 5.0.x
PHP: 8.1
Zephir: 0.16.0
brew: phalcon#4.1.0
Location: ~/Documents/cphalcon
Details
I have just cloned Phalcon following the instructions here.
I have already installed zephir.phar and set it up to be executable.
Then I cloned the repo and ran these:
cd cphalcon/
git checkout tags/v5.0.0 ./
zephir fullclean
zephir build
However the second command throws
error: pathspec 'tags/v5.0.0' did not match any file(s) known to git
(and there is indeed no folder named tags)
As for the fourth command (zephir build), it throws:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32768 bytes) in phar:///usr/local/bin/zephir/Library/Statements/ForStatement.php on line 631
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32768 bytes) in phar:///usr/local/bin/zephir/Library/Statements/ForStatement.php on line 631
This is a newly cloned repo, and I have not yet changed anything.
Any clues as to what is throwing this error?
Update
(I removed the update, as it did not have any connection to the fix for this issue)

These are the commands that I used to install the latest Phalcon 5.0.0RC3:
cd /usr/local/lib
git clone -b 5.0.x https://github.com/phalcon/cphalcon
cd cphalcon
zephir fullclean
zephir build
phpenmod phalcon
Do keep in mind that Phalcon 5 is still a release candidate and has not been finalized yet. The Phalcon team hopes to have the final release out by the end of August.
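To confirm the extension is actually loaded once installed (a quick sanity check; this assumes the CLI reads the same php.ini as your web server):
php -m | grep phalcon    # lists phalcon if the extension is loaded
php --ri phalcon         # prints the extension's version and build details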

Fix
I tried the following steps, and it worked!
I downloaded the repo from here (cloning the repo will probably work fine too): https://github.com/phalcon/cphalcon/releases/tag/v5.0.0RC4
I ran the following (note that zephir compile will take a long time):
cd cphalcon/
zephir fullclean
zephir compile
cd ext
phpize
./configure
make && make install
cd ..
zephir install
Then I added extension="phalcon.so" to the php.ini file.
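If you're unsure which php.ini the CLI actually reads, php --ini will tell you; the conf.d path below is only an assumption for a Homebrew PHP 8.1 install, so adjust it to your setup:
php --ini    # shows the loaded php.ini and scanned conf.d files
# hypothetical Homebrew path -- adjust to whatever php --ini reports:
echo 'extension=phalcon.so' >> /usr/local/etc/php/8.1/conf.d/ext-phalcon.ini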
I restarted the webserver:
sudo apachectl restart
Finally I created the files public/index.php and .htrouter.php as instructed here, and started the webserver:
$(which php) -S localhost:8000 -t public .htrouter.php
More
It seems the only issue was with zephir generate. The framework seemed to work perfectly fine as long as I skipped that command (zephir build runs zephir generate as well).
But this might be more of a way to avoid the issue than a fix...
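If you do need zephir generate (and therefore zephir build), a gentler workaround than skipping it is to raise PHP's memory limit just for the Zephir run, since the fatal error above is PHP hitting its default 128 MB cap. A sketch, using the phar path from the error message:
php -d memory_limit=2G /usr/local/bin/zephir build    # or memory_limit=-1 to lift the cap entirely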

Related

Resolving Errors With Git Index Too Small

I recently updated the development server that hosts our code repos to a newer version of Ubuntu (18.04). As part of the process git was upgraded to version 2.23.0. The actual application servers where the code gets deployed to need to be able to checkout the latest changes from the git repos. When I try to do a 'git fetch' on those servers I get a long list of errors that look like this:
error: index file
./objects/pack/._pack-5b58f700fea57ee6f8ff29514a376b945bb1c8a9.idx is
too small
I did some digging around to see if I could come up with a solution, but so far nothing has worked. I tried the answers listed here: git error: "index file is too small".
Neither git index-pack nor git repack -a -d solved the issue. I even tried deleting the local copy of the files from the application server and installing fresh using git clone. The clone itself threw a bunch of errors similar to before:
remote: error: index file
./objects/pack/._pack-5b58f700fea57ee6f8ff29514a376b945bb1c8a9.idx is
too small
At this point I'm out of ideas. Any help would be appreciated.
Edit: The output of du -h suggests that there is enough disk space.
The error message sounds like file corruption. If you have not run out of disk space, you can delete the index file and recreate it from the corresponding pack file (note that git index-pack takes the .pack file, not the .idx, and regenerates the index):
git index-pack -v ./objects/pack/pack-5b58f700fea57ee6f8ff29514a376b945bb1c8a9.pack
You might also want to run git fsck to
verify the connectivity and validity of the objects in the Git database -- both the remote and the local one.
If your index is corrupt, you can also try to reset the branch, which will create a new index file (sketched below):
To be safe, back up .git/index.
Remove the index file .git/index.
Perform git reset.
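As a concrete sketch of those three steps:
cp .git/index .git/index.backup    # 1. back up the current index
rm .git/index                      # 2. remove the (possibly corrupt) index
git reset                          # 3. rebuild the index from HEAD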
References
The issue is a possible duplicate of git error: "index file is too small"
Documentation on git index-pack can be found at https://git-scm.com/docs/git-index-pack
Some notes on repairing a broken index: https://makandracards.com/makandra/5899-how-to-fix-a-corrupt-git-index
fatal: packfile name 'server' does not end with '.pack'
I encountered this error when transferring my git repo from Mac OS to another system. Files starting with '._' are Mac OS metadata files generated by the tar command. So look at this question to avoid '._*' files: Tar command in mac os x adding "hidden" files, why?
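On the receiving side you can simply delete the AppleDouble files, and on macOS you can tell tar not to create them in the first place via the COPYFILE_DISABLE environment variable (a sketch; run the find inside the bare repo):
find objects/pack -name '._*' -delete              # drop the junk metadata files
COPYFILE_DISABLE=1 tar -czf repo.tar.gz my-repo/   # repack on macOS without ._* entries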

Docker - /bin/sh: <file> not found - bad ELF interpreter - how to add 32-bit lib support to a Docker image

UPDATE – Old question title:
Docker - How to execute unzipped/unpacked/extracted binary files during docker build (add files to docker build context)
--
I've been trying (half a day :P) to execute a binary extracted during docker build.
My Dockerfile contains roughly:
...
COPY setup /tmp/setup
RUN \
unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b
...
Within directory b is a binary file imcl.
The error I was getting:
/bin/sh: 1: /tmp/setup/a/b/imcl: not found
What was confusing was that displaying directory b (inside the Dockerfile, during build) before trying to execute the binary showed the correct file in place:
RUN ls -la /tmp/setup/a/b/imcl
-rwxr-xr-x 1 root root 63050 Aug 9 2012 imcl
RUN file /tmp/setup/a/b/imcl
ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
Being a Unix noob, at first I thought it was a permission issue (root of the host being different from root of the container, or something) but, after checking, the UID was 0 for both, so it got even weirder.
Docker asks not to use sudo so I tried with su combinations:
su - -c "/tmp/setup/a/b/imcl"
su - root -c "/tmp/setup/a/b/imcl"
Both of these returned:
stdin: is not a tty
-su: /tmp/setup/a/b: No such file or directory
Well heck, I even went and defied Docker recommendations and changed my base image from debian:jessie to the bloatish ubuntu:14.04 so I could try with sudo :D
Guess how that turned out?
sudo: unable to execute /tmp/setup/a/b/imcl: No such file or directory
Randomly googling I happened upon a piece of Docker docs which I believe is the reason to all this head bashing:
"Note: docker build will return a no such file or directory error if the file or directory does not exist in the uploaded context. This may happen if there is no context, or if you specify a file that is elsewhere on the Host system. The context is limited to the current directory (and its children) for security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason why ADD ../file will not work."
So my question is:
Is there a workaround to this?
Is there a way to add extracted files to docker build context during a build (within the dockerfile)?
Oh, and the machine I'm building this on is not connected to the internet...
I guess what I'm asking is similar to this (though I see no answer):
How to include files outside of Docker's build context?
So am I out of luck?
Do I need to unzip with a shell script before sending the build context to Docker daemon so all files are used exactly as they were during build command?
UPDATE:
Meh, the build context actually wasn't the problem. I tested this and was able to execute unpacked binary files during docker build.
My problem is actually this one:
CentOS 64 bit bad ELF interpreter
Using debian:jessie and ubuntu:14.04 as base images only gave No such file or directory error but trying with centos:7 and fedora:23 gave a better error message:
/bin/sh: /tmp/setup/a/b/imcl: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
So that led me to the conclusion that this is actually the problem of running a 32-bit application on a 64-bit system.
Now the solution would be simple if I had internet access and repos enabled:
apt-get install ia32-libs
Or
yum install glibc.i686
However, I don't... :[
So the question becomes now:
What would be the best way to achieve the same result without repos or an internet connection?
According to IBM, the precise libraries I need are gtk2.i686 and libXtst.i686 and possibly libstdc++
[root@localhost]# yum install gtk2.i686
[root@localhost]# yum install libXtst.i686
[root@localhost]# yum install compat-libstdc++
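If the build machine has no repos at all, one hedged option is to fetch those RPMs (with dependencies) on a connected machine of the same release using yumdownloader from yum-utils, copy them into the build context, and install them with plain rpm:
# on a connected CentOS 7 box:
yumdownloader --resolve gtk2.i686 libXtst.i686
# copy the downloaded .rpm files into an rpms/ directory in the build context,
# then in the Dockerfile:
COPY rpms/ /tmp/rpms/
RUN rpm -ivh /tmp/rpms/*.rpm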
UPDATE:
So the question becomes now:
What would be the best way to achieve the same result without repos or an internet connection?
You could use various non-official 32-bit images available on DockerHub, search for debian32, ubuntu32, fedora32, etc.
If you can't trust them, you can build such an image yourself; you can find instructions on DockerHub too, e.g.:
on f69m/ubuntu32 home page, there is a link to GitHub repo used to generate images;
on hugodby/fedora32 home page, there is an example of commands used to build the image;
and so on.
Alternatively, you can prepare your own image based on some official image and add 32-bit packages to it.
Say, you can use a Dockerfile like this:
FROM debian:wheezy
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y ia32-libs
...and use the produced image as a base (with the FROM directive) for the images you're building without internet access.
You can even create an automated build on DockerHub that will rebuild your image automatically when your Dockerfile (posted, say, on GitHub) or mainline image (debian in the example above) changes.
No matter how you obtained an image with 32-bit support (whether you used an existing non-official image or built your own), you can then store it in a tar archive using the docker save command and later import it using the docker load command.
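For example (the image name here is just a placeholder):
docker save my32bit-base:latest -o my32bit-base.tar   # on the connected machine
# transfer the tar file, then on the offline machine:
docker load -i my32bit-base.tar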
You're in luck! You can do this using the ADD command. The docs say:
If <src> is a local tar archive in a recognized compression format
(identity, gzip, bzip2 or xz) then it is unpacked as a directory... When a directory is
copied or unpacked, it has the same behavior as tar -x: the result is
the union of:
Whatever existed at the destination path and
The contents of the
source tree, with conflicts resolved in favor of “2.” on a
file-by-file basis.
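One caveat: ADD only auto-extracts recognized tar archives (the formats quoted above); a .zip like the one in the question is copied verbatim, not unpacked. A rough sketch of repacking the archive as a tarball so ADD can extract it:
# outside the build, repack the zip as a tarball:
unzip -q setup/x/y.zip -d y-unpacked
tar -czf setup/x/y.tar.gz -C y-unpacked .
# then in the Dockerfile, ADD unpacks it at build time:
ADD setup/x/y.tar.gz /tmp/setup/a/b/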

Gitlab import: Could not locate Gemfile

I have a test VM with Debian Wheezy and no ruby installed. Gitlab 6.9.2 has been installed using the provided installer which brings an embedded ruby. Now, I want to import some old repos into Gitlab, but I cannot find the correct procedure. I think it should be this way:
su - git
export PATH=$PATH:/opt/gitlab/embedded/bin
cd ~
bundle exec rake gitlab:import:repos RAILS_ENV=production
However, I only get the error "Could not locate Gemfile". I have tried several other ways, also installing Debian's ruby, and searched multiple Google and StackOverflow results, but I couldn't get it to work.
You should first place the bare repos in the repo dir. The default path for omnibus is /var/opt/gitlab/git-data/repositories/<namespace>. Then you just run the rake task:
sudo -u git -H cp -r my-project/.git /var/opt/gitlab/git-data/repositories/<namespace>/my-project.git
sudo gitlab-rake gitlab:import:repos
See invoking rake tasks and the import mechanism.
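After the import, omnibus also ships a sanity-check task that is worth running (SANITIZE keeps private data out of the output):
sudo gitlab-rake gitlab:check SANITIZE=true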
Edit: Sent an MR upstream to include this info in the readme.
I ran into the same issue with "Could not locate Gemfile", so I searched for the Gemfile and tried several folders until it worked.
This solution applies to GitLab installed from source (in my case it ran inside the official Docker container).
Place your .git bare repository (or several of them) in
"/var/opt/gitlab/git-data/repositories/<namespace>/my-project.git"
Switch to user "git":
su git
Check that rake is on your PATH by simply running "rake". If it is not available, extend your PATH:
export PATH=$PATH:/opt/gitlab/embedded/bin
After that, switch to the directory where the rake command to import your bare projects will work, and do the import:
cd /opt/gitlab/embedded/service/gitlab-rails/
bundle exec rake gitlab:import:repos RAILS_ENV=production
Output will be similar to this:
Processing raspberry/apollo-web.git
* Created apollo-web (raspberry/apollo-web.git)
Processing raspberry/apollo-web.wiki.git
* Skipping wiki repo
Processing dhbw/dhbw-prototyping-node-rest-course.git
...
EDIT:
OK, I was happy a bit too early. Although the output says it was imported, no new projects show up in the web GUI.
I will investigate further...

Timeout when deploying to Heroku

I'm pushing a Haskell web app with quite a few dependencies to heroku, which requires heroku to download and compile all of the dependencies.
Things consistently seem to just "stop" suddenly in the logs after a certain amount of time. The log just stops abruptly mid-line, at a slightly different spot each time.
After looking at heroku logs, it seems that the time between the message saying compilation has begun and the point where the log cuts off is always exactly 15 minutes (plus a few seconds). The buildpack works fine; I've used it for applications in the past. If I remove all required packages from the dependencies, it attempts to compile my web app and behaves exactly as it would on a local machine (it stops after complaining about a missing dependency). I've done this with almost all of the required packages one by one... but when I put a lot of them together, it fails in much the same way.
Here is an example of what the log looks like:
... (lots of similar lines, nothing out of the ordinary) ...
Downloading feed-0.3.9.1...
Downloading texmath-0.6.4...
Configuring resource-pool-0.2.1.1...
Building tagsoup-0.12.8...
Building resource-pool-0.2.1.1...
Installed hexpat-0.20.3
Configuring case-insensitive-1.1...
Building case-insensitive-1.1...
Configuring void-0.6.1...
Installed resource-pool-0.2.1.1
Configuring unordered-containers-0.2.3.2...
Installed case-insensitive-1.1
Downloading http-types-0.8.1...
Building void-0.6.1...
Building unordered-containers-0.2.3.2...
Building lifted-base-0.2.1.0...
Confi
! Push rejected, failed to compile Multipack app
And every time, it stops on a different line and a different place in the line. I've tried it with various buildpacks and it all seems to work the same way.
All signs seem to point to a timeout error ... the most incriminating being the fact that all failed attempts are exactly 15 minutes long ... or maybe a disk space error. Does anyone know if there is a way to increase the timeout or speed up compilation? Is it possible for me to pre-compile all of my binaries?
It is possible to compile the binary on your machine, commit it to the Heroku-only repository and push that. Here is a script I used to deploy:
#!/bin/sh -e
cabal configure
cabal build
cd heroku # this is my Heroku-only repository
git rm -r *
cp ../dist/build/labyrinth-server/labyrinth-server .
cp -r ../public/ . # this is the static files directory
echo "web: ./labyrinth-server" > Procfile
touch requirements.txt # pretend to be a Python app, otherwise Heroku doesn't know what to do
git add *
git commit -m "New version."
git push
cd - >/dev/null
Note: I switched to OpenShift later, and the OpenShift deploy script has a few enhancements.
Another note: you should make sure you are building on a machine with compatible architecture and link against an existing libc version. You can check all that yourself if you SSH onto the server manually (rhc ssh or heroku run bash). For the current OpenShift, a Debian Wheezy x64 machine produces compatible binaries.
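A few commands that help with that check (run them inside the dyno, then compare with your build machine):
heroku run bash          # open a one-off dyno
uname -m                 # architecture, e.g. x86_64
ldd --version            # glibc version the binary must be compatible with
file labyrinth-server    # inspect the deployed binary itself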
I wrote a write-up addressing this problem, with full step-by-step instructions for such a pre-compiled deploy.

Google protocol buffer installation failing on Windows XP

I am trying to run these commands given in the README of protocol buffers:
$ ./configure
$ make
$ make check
$ make install
When I run ./configure, I get the error:
bash: ./configure: No such file or directory
First of all, it seems like you haven't changed into the directory that contains the executable file "configure".
If your goal is to install protocol buffer on Windows, specifically for Java, you can do the following steps:
Download 2 files from http://code.google.com/p/protobuf/downloads/list (get the most up-to-date version)
protobuf-2.4.1.zip
protoc-2.4.1-win32.zip (this is the pre-compiled file for easy install)
Follow instructions in README from the downloaded protobuf
Install Apache Maven
Follow instructions in README in the downloaded Apache Maven
Step 3 is the one that I spent a lot of time on, since I hadn't read the whole documentation in the first place and did it the harder way. I suggest the [EASY] route below, since it took me 5 minutes instead of waiting for Cygwin to download.
[DIFFICULT] To compile the binary yourself, download and install Cygwin (REMEMBER to select gcc)
Run ./configure, make, make check, make install
[EASY] Using pre-compiled binary:
Unzip protoc-2.4.1-win32.zip
Place protoc.exe in protobuf-2.4.1\src (notice that this is different from protobuf-2.4.1\java\src . Some people on the net are confused between these two paths, so they get the "An Ant BuildException has occured: Execute failed: java.io.IOException: Cannot run program "../src/protoc"" exception and have to change the pom.xml file manually. If we place protoc.exe in the correct folder, we don't have to modify anything, as far as I'm aware)
Place protoc.exe in PATH (i.e. protobuf-2.4.1\src)
Then, below is just the copy from README file
Check protoc by executing "protoc --version"
cd protobuf-2.4.1\java (which has the file "pom.xml")
run "mvn test", "mvn install", "mvn package"
There should not be any errors.
You must run ./autogen.sh first.
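That applies when building from a source checkout rather than a release tarball: autogen.sh generates the configure script. A sketch (autogen.sh needs autoconf, automake, and libtool installed):
./autogen.sh    # generates ./configure
./configure
make
make check
make install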
