How to build Linux tools via MinGW-w64 in 2021? - mingw-w64

I'm trying to follow this example to build wget2:
https://gnutoolchains.com/building/
I've installed the x86_64-8.1.0-win32-seh-rt_v6-rev0 preset (?) and first tried to build an old version of wget1, but I've reached a dead end: there is no way to run ./configure to create the build rules. Did I install something wrong? How am I supposed to know what exactly needs to be installed? Do I need a different preset for each application I want to build? And how am I supposed to handle the insane list of requirements of wget2:
https://gitlab.com/gnuwget/wget2#build-requirements
And lastly - why is it so janky? Is it by design?

There is a way to run ./configure on Windows. You need MSYS2 for that, which will give you a bash shell and the tools needed by ./configure.
MSYS2 comes with a package manager (pacman) which allows you to install a more recent MinGW-w64.
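For example, a minimal sketch of the MSYS2 route (the package group names below are the usual MSYS2 ones; adjust to your setup):
# inside the MSYS2 shell: install the build tools and the 64-bit MinGW-w64 toolchain
pacman -S --needed base-devel mingw-w64-x86_64-toolchain
# then open the "MSYS2 MinGW 64-bit" shell, cd into the unpacked source tree and run
./configure
make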

Related

GSL not installing in Windows 10 using GIT Bash

When I run the config script to install the GSL library on Windows 10, I get the following error:
error: Something went wrong bootstrapping makefile fragments for
automatic dependency tracking. If GNU make was not used, consider
re-running the configure script with MAKE="gmake" (or whatever is
necessary). You can also try re-running configure with the
'--disable-dependency-tracking' option to at least be able to build
the package (albeit without support for automatic dependency
tracking).
If I run ./config MAKE="gmake" I still get the error. I have searched on Stack Overflow and on the web and still haven't found a solution.
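For reference, the other workaround the error message suggests would look roughly like this (only a sketch, assuming the script is the usual Autotools ./configure):
./configure --disable-dependency-tracking
make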

Docker, Alpine Linux and Ubuntu - why is `node_modules` different?

Environment
I use GitLab CI/CD to bundle my application.
I use node:14-alpine as the image and run yarn to build my app.
After the build is finished, I deploy my app via rsync to the target server, which runs Ubuntu 20.04.
On this server, I use pm2 to start the app and keep it running.
Issue
If I look into the logs, I see an error like this:
I've searched a bit and found that the issue might be caused by musl-dev being missing.
I've installed it on my server and in the docker container, but with the same result.
BUT, if I delete the node_modules directory from the server and run yarn install right on the server, the app runs as expected.
Question
So why does this issue happen here? Must I have the same distribution & version of Linux in my docker container to satisfy all dependencies?
Don't use an Alpine image if you're deploying on Ubuntu.
So why does this issue happen here?
The fundamental C standard library implementation is different on the two (Alpine uses musl libc; Ubuntu and more or less all other distros use GNU C Library (glibc)).
Trying to move binaries (such as those that might appear in node_modules for native modules) built against one libc implementation to a system using the other will likely be painful or not work at all (as you noticed).
Must I have the same distribution & version of Linux in my docker container to satisfy all dependencies?
If none of the dependencies use native code, then you should be able to just move things over without issues, but otherwise it'll be easiest (e.g. considering the versions of other libraries your dependencies may link against) to just use the same version as your target OS – or, if you don't want to think about that, just deploy your application as a Docker container.
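As a rough way to see the mismatch yourself (the path below is only an example; addons built by node-gyp typically end up under node_modules/<package>/build/Release/):
ldd node_modules/<package>/build/Release/*.node
# an addon built on Alpine will report something like 'libc.musl-x86_64.so.1 => not found' on Ubuntu,
# while a glibc build links against libc.so.6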
Even though the suggestion from @AKX is a good answer, I've played around a bit to figure out how to solve this special case.
Here is my solution:
install musl-dev on the server
link it to /lib
apt-get install musl-dev
ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1
In my case it was only this single dependency that caused the trouble. If I run into more of these, I will follow AKX's suggestion and choose a Debian/Ubuntu-like distribution to bundle it.

Troubles when installing Hep-math

I'm using the MobaXterm terminal on Windows. I'm trying to install HEPMath 1.4 using the PDF manual. I need to use a prefix to send it to the right location, but I don't know where that right location is.
After downloading the package and extracting it on the desktop, I have to do the following:
./configure --prefix=something
make
make install
So when I do --prefix=Desktop I get an error.
Can anyone help me find a good prefix?
I'm new to Linux.
You don't need to use a custom prefix if you run the last command as root:
./configure
make
sudo make install
Update: The next page of the manual says
If you want to generate Python extension modules with HEPMath you may want to use
a prefix that your Python interpreter searches for Python packages. On my system this
is $HOME/.local.
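Following that hint, a minimal sketch (assuming you want the prefix from the manual, $HOME/.local, so no root is needed):
./configure --prefix=$HOME/.local
make
make install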

How to install spice-server correctly on CentOS 7 from source?

I ran into a problem when trying to install QEMU with SPICE support.
It works well if I install spice-server with yum: in that case, when I type ./configure --enable-spice in the root directory of QEMU's source tree, spice-server is detected correctly.
But now I want to install spice-server by compiling its source code, because I have some work to do on it.
I tried ./configure; make; make install and ./configure --prefix=/usr; make; make install. QEMU couldn't find the installed spice-server either way. I just got
ERROR: User requested feature spice
configure was not able to find it.
Install spice-server(>=0.12.0) and spice-protocol(>=0.12.3) devel
returned.
I don't have this problem on Ubuntu, but I don't know how to fix it on a CentOS server. Does anybody have a solution?
I guess you are trying to build QEMU with SPICE from source code.
That involves many dependencies and configuration steps, especially while you have a system-installed 'qemu' running.
Maybe https://github.com/grizzlybears/sqb can help you.
It is a set of helper scripts that automatically do the following:
1. Install build dependencies.
2. Get the code from the official repository.
3. Get a 'fedora base cloud image' as a test image.
4. Autogen/configure/build QEMU with SPICE in a local dir, touching nothing in the system.
5. Run a test VM using our hand-made 'qemu'.
6. Open a SPICE console to the VM, if you have 'spice-gtk-tools' installed.
You should first clone spice-protocol manually, run ./autogen.sh && ./configure && make && make install, and then export PKG_CONFIG_PATH: export PKG_CONFIG_PATH={your pkg config path}
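A rough sketch of that approach (the /usr/local prefix and the pkgconfig directories are assumptions; use whatever prefix you actually configured):
# in the spice-protocol source directory (and likewise for spice-server)
./autogen.sh --prefix=/usr/local
make
make install
# let QEMU's ./configure find the .pc files installed above
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:/usr/local/share/pkgconfig
# then, back in QEMU's source tree
./configure --enable-spice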

Lua cannot find LuaRocks-installed modules on Linux

I installed the luarocks package on Linux Mint and afterwards installed a couple of rocks, e.g. with sudo luarocks install telescope, but when running a script via lua script.lua, require cannot find the module.
Meta: Doing this Q&A style, because while questions that answer this exist, none seem to be generically titled or easily findable, and I hope that I can help someone with this.
In this specific case, the problem was simply that on my distribution the default Lua version was, at the time of writing, 5.2, whereas the LuaRocks package was built for 5.1, so Lua 5.2 could not find the rocks because the two versions use different module paths.
The solution was to download the LuaRocks source code from its GitHub repository and compile it for 5.2:
./configure --lua-version=5.2
make build
sudo make install
To make sure I can also install packages for LuaJIT, which currently uses 5.1 libs, I also executed the above lines with --lua-version=5.1 beforehand (if I executed them afterwards, the default luarocks command would point at the 5.1 build).
To build LuaRocks, you need liblua5.2-dev and/or liblua5.1-dev.
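A quick way to check whether you are hitting this mismatch (just a sketch; the exact paths differ per distribution):
# which Lua the plain `lua` command actually is
lua -v
# the module search path that interpreter uses
lua -e 'print(package.path)'
# the paths LuaRocks is configured for - compare the version directories
luarocks path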
The solution for me was to run
eval "$(luarocks path)"
and it worked. Hope it works for others.
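For context, luarocks path prints export statements for LUA_PATH and LUA_CPATH pointing at the LuaRocks tree, and eval applies them to the current shell only. To make it stick across sessions, you can append it to your shell startup file (file name is an assumption, e.g. ~/.bashrc):
echo 'eval "$(luarocks path)"' >> ~/.bashrc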
