I am trying to install multiple versions of CouchDB side by side, say 1.1.0 alongside 0.10.0.
Using build-couchdb I was able to get the latest version up and running with no problems; now I am trying to install a second version (0.10.0), but with no success so far. Following the instructions, I've tried:
rake git="git://git.apache.org/couchdb.git tags/0.10.0" install=/full/path/to/couchdb/dir
It does a bunch of installs but fails at the end with "rake aborted!".
Has anyone successfully done this?
Build CouchDB can be slightly brittle. In production, what I've seen is a lot of complete wipes and complete rebuilds. Since people tend to build only once, the build time is not a huge pain-point.
Next, try to use the Erlang shortcut for installing side-by-side CouchDB builds. (Search for couchdb_build in the README).
rake git="git://git.apache.org/couchdb.git tags/0.10.0" \
install=/full/path/to/couch/dependencies \
couchdb_build=/full/path/to/couch/0.10.0
rake git="git://git.apache.org/couchdb.git tags/1.1.0" \
install=/full/path/to/couch/dependencies \
couchdb_build=/full/path/to/couch/1.1.0
With the install locations identical, Build CouchDB should skip the entire process of building and installing dependencies when it builds 1.1.0. This includes:
Erlang
OTP
Javascript
I believe this technique is used more often than the simpler one for side-by-side builds. Therefore it is possible this workaround will fix your error.
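If both builds complete, each couchdb_build tree is self-contained, so the two versions can run side by side. A rough sketch, assuming the standard layout Build CouchDB produces (set a different [httpd] port in each tree's etc/couchdb/local.ini before starting both):
/full/path/to/couch/0.10.0/bin/couchdb
/full/path/to/couch/1.1.0/bin/couchdb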
If you still have issues, it is probably a bug. Would you please submit a Build CouchDB issue indicating your operating system version and also attach your rake.log file?
When I run the configure script for installing the GSL library on Windows 10, I get the following error:
error: Something went wrong bootstrapping makefile fragments for
automatic dependency tracking. If GNU make was not used, consider
re-running the configure script with MAKE="gmake" (or whatever is
necessary). You can also try re-running configure with the
'--disable-dependency-tracking' option to at least be able to build
the package (albeit without support for automatic dependency
tracking).
If I run ./config MAKE="gmake" I still get the error. I have searched on Stack Overflow and on the web and still haven't found a solution.
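For reference, the two suggestions in the error message translate to roughly the following in a standard autoconf setup (assuming the script is the usual ./configure shipped with GSL):
./configure MAKE="gmake"
./configure --disable-dependency-tracking
make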
I'm trying to follow this example to build wget2:
https://gnutoolchains.com/building/
I've installed the x86_64-8.1.0-win32-seh-rt_v6-rev0 preset (?) and first tried to build an old version of wget (wget1), but I've reached a dead end. There is no way to run ./configure to create the build target rules. Did I install something wrong? How am I supposed to know what exactly needs to be installed? Is it a new preset for each application I want to build? And how am I supposed to handle the insane list of requirements of wget2:
https://gitlab.com/gnuwget/wget2#build-requirements
And lastly - why is it all so janky? Is it by design?
There is a way to run ./configure on Windows. You need MSYS2 for that, which will give you a bash shell and the tools needed by ./configure.
MSYS2 comes with a package manager (pacman) which allows you to install a more recent MinGW-w64.
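A rough sketch of that route, assuming the current MSYS2 package names (run the build itself from the "MSYS2 MinGW 64-bit" shell):
pacman -S --needed base-devel git                # bash, make, autotools and friends
pacman -S --needed mingw-w64-x86_64-toolchain    # a recent MinGW-w64 gcc
./configure && make                              # inside the unpacked wget source tree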
Environment
I use GitLab CI/CD to bundle my application.
I use node:14-alpine as the image and run yarn to build my app.
After the build is finished, I deploy my app via rsync to the target server, which runs Ubuntu 20.04.
On this server, I use pm2 to start the app and keep it running.
Issue
If I look into the logs, I see an error like this:
I've searched a bit and found that the issue might be caused by musl-dev missing.
I've installed it on my server and in the docker container, but with the same result.
BUT, if I delete the node_modules directory from the server and run yarn install right on the server, the app runs as expected.
Question
So why does this issue happen here? Must I have the same distribution & version of Linux in my docker container to fit all dependencies?
Don't use an Alpine image if you're deploying on Ubuntu.
So why does this issue happen here?
The fundamental C standard library implementation is different on the two (Alpine uses musl libc; Ubuntu and more or less all other distros use GNU C Library (glibc)).
Trying to move binaries (such as those that might appear in node_modules for native modules) built against one libc implementation to a system using the other will likely be painful or not work at all (as you noticed).
Must I have the same distribution & version of Linux in my docker container to fit all dependencies?
If none of the dependencies use native code, then you should be able to just move things over without issues, but otherwise it'll be easiest (e.g. considering the versions of other libraries your dependencies may link against) to just use the same version as your target OS – or, if you don't want to think about that, just deploy your application as a Docker container.
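As a minimal sketch of the "match the target" option (image tag and yarn flags are only illustrative), you could install the dependencies inside a Debian-based Node 14 image, whose glibc is compatible with Ubuntu 20.04, before rsyncing:
docker run --rm -v "$PWD":/app -w /app node:14-bullseye yarn install --frozen-lockfile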
Even though the suggestion from @AKX is a good answer, I've played around a bit to figure out how to solve this special case.
Here is my solution:
install musl-dev on the server
link it to /lib
apt-get install musl-dev
ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1
In my case it's only this single dependency that causes the trouble. If I run into more of these, I will follow AKX's suggestion and choose a Debian/Ubuntu-like distribution to bundle it.
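If you want to confirm which native addon is the one pulling in musl, inspecting the compiled .node files usually shows it (the module path below is only an example):
file node_modules/<module>/build/Release/binding.node    # shows what the addon was built against
ldd node_modules/<module>/build/Release/binding.node     # a musl build references libc.musl-x86_64.so.1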
I keep totally failing to update an etherpad-lite server. The problem: Even a Google search for the update procedure brings up hardly any information, only that one should run "git pull origin".
I have now tried this in many different ways. The update usually works, but afterwards one of these errors occurs:
Plugins can no longer be installed
The service can no longer be started (TypeError: log4js.replaceConsole is not a function)
The entire admin panel no longer works.
I tried uninstalling or updating all plug-ins beforehand, but both hardly brought any improvement, only other errors. Updating the plugins in the admin console fails, so I tried it via the updatePlugins.sh script. There a message appears that at least etherpad-lite 1.8.6 must be installed. I am currently on version 1.8.4 and would like to update to the latest version 1.8.12. However, some of the plug-ins are still updated. Very strange behaviour.
I would be happy if someone could tell me how to properly update the etherpad-lite instance step by step (Ubuntu 20).
Thank you!
I have recently updated Etherpad-lite from version 1.8.6 to 1.8.13.
For me, executing git pull origin and then checking out the 1.8.13 release tag with git checkout 1.8.13 did the trick.
It is important, despite having Etherpad configured as a service, to run it for the first time using src/bin/run.sh.
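Put together, the sequence looks roughly like this (install path and service name are placeholders for your own setup):
cd /opt/etherpad-lite        # wherever your instance lives
systemctl stop etherpad      # if you run it as a service; adjust the name
git pull origin
git checkout 1.8.13
src/bin/run.sh               # first start after the update; this also installs/updates the dependencies
Once it comes up cleanly, stop it with Ctrl+C and start your service again.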
Node v12.22.1
npm 6.14.12
Ubuntu 20.04.2
I hope it has been useful to you.
This post summarizes my painful but finally successful (just by chance) way of building my own conda package for the netgen meshing tool with its Python interface. I found the recipe for the netgen build thanks to tpaviot.
After cloning the repository into the 'netgen-conda' folder I ran:
conda build netgen-conda/netgen-6.2-dev
This reports "Unsatisfiable dependencies": 'oce', 'gcc-5', 'binutils'.
So I tried to install these packages myself. Unfortunately the documentation does not emphasize the important fact that 'conda build' uses its own temporary environment, so it doesn't matter what you have installed (see). Nevertheless, even installing 'gcc-5' together with 'binutils' manually turned out to be nearly impossible.
Hint for other newbies: a lot of my problems disappeared after I learned the details about channels.
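For anyone in the same situation, two commands worth knowing (the second one requires the anaconda-client package) to see which channels are consulted and which channels provide a given package:
conda config --show channels
anaconda search -t conda gcc-5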
My first try was installing 'gcc-5' with 'binutils' from the 'salford_systems' channel suggested by anaconda:
conda install -c salford_systems binutils gcc-5
But it results in:
ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'salford_systems::gcc-5-5.3.0-0'.
LinkError: post-link script failed for package salford_systems::gcc-5-5.3.0-0
running your command again with -v will provide additional information
location of failed script: /home/jb/miniconda3/envs/test/bin/.gcc-5-post-link.sh
Using verbose output ('-v') provided no more info. I was also confused by the fact that the script did not exist at the given path (probably automatically deleted).
With my current experience I admit that the reason for the problem can be dug out from the '-vv' output (reported issue). After some trying I found that the only way to install both is to first install 'gcc-5' into a clean environment and then install 'binutils'. Since 'conda build' installs everything from scratch and there is no way to specify the order of installed packages, I was stuck.
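For the record, the order-dependent sequence that did work interactively looks roughly like this (the environment name is arbitrary):
conda create -n gcc5-env -c salford_systems gcc-5
source activate gcc5-env      # 'conda activate' on newer conda versions
conda install -c salford_systems binutils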
Another issue that puzzled me is the 'conda build' long-prefix hack. For an unknown reason it uses an extremely long prefix for an auxiliary folder, which results in various kinds of issues. I have faced three such problems:
As is usual today, I have an encrypted HOME, causing a known issue.
Using the workaround '--croot /tmp' prevents creating hard links from '/tmp' into 'HOME/miniconda3' since they are on different filesystems. There is a fallback to copying instead. For a while I even thought the fallback didn't work, but it did; it just made the build run longer.
Trying to install 'gcc' (4.x) from the 'default' channel complained about a too-short prefix. So the ultimate workaround was to set the length of the prefix manually with '--prefix-length 70'.
Finally, I found that the dependency on 'binutils' is not necessary and successfully built the package with:
conda build --prefix-length 70 -c salford_systems -c conda-forge -c dlr-sc netgen-conda/netgen-6.2-dev
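To locate and test-install the resulting package from the local build cache, something like this should work (assuming the recipe's package name is netgen; check its meta.yaml):
conda build --output netgen-conda/netgen-6.2-dev    # prints the path of the built package
conda install --use-local netgen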
Summary (of open questions):
Conda channels introduce a new kind of dependency hell, one already forgotten by users of 'apt-get'. Is there a way to figure out what the canonical channel for a package is?
Has anyone succeeded in building with the combination of 'gcc-5' and 'binutils'?
There is still a lack of documentation about conda's internal mechanisms, and error messages do not provide a clue to the problem.
Conda-build uses a problematic prefix hack and lacks the ability to control the order of installed packages. Does anybody know the reason for this hack?