I need to build an application that runs on an embedded, vendor-supplied version of Linux. According to the documentation it has libc version 2.8.90. I have built a simple application in C++ on a desktop and copied the binary across to the hardware, along with copies of the libraries it is linked against. To avoid any potential conflicts from linking against different library versions, I considered linking to the libraries statically. After some research I found the following question and answers, and reading through them gave me the impression that linking statically is not a good thing to do. What I could not find there (or anywhere else so far) was a simple explanation of why this seems to be frowned upon. To me (pretty much a novice to Linux) it looks like a way of solving my problem of bundling my executable as a single package and running it on my hardware, yet it is clearly considered a bad idea. Can someone please explain why?
Obviously I am aware that it would bloat my binary, but I am not worried about that. I am also aware of the licensing issues, but I am not particularly concerned with that aspect; this is not a commercial application, so I do not think it applies to me.
The advantages are, as you expect, a single binary that works without having to install the other dependencies and which you can easily move around.
The disadvantages are the larger size, the need to recompile the entire application whenever there's an update (e.g. a security fix) to a linked library, and perhaps licensing issues (as you've noted).
Tradeoffs. If it solves your problem, go for it.
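If you do decide to try it, a quick sanity check is to build a trivial program fully statically and run it on the target first. A minimal sketch, assuming a GCC-style toolchain (substitute your vendor's cross compiler for plain g++):

    // hello.cpp - throwaway program for testing a fully static build on the target.
    //
    // Build and inspect (the compiler name is a placeholder for your cross compiler):
    //   $ g++ -static -o hello hello.cpp
    //   $ file hello    # should report "statically linked"
    //   $ ldd hello     # should print "not a dynamic executable"
    #include <iostream>

    int main() {
        std::cout << "Hello from a statically linked binary" << std::endl;
        return 0;
    }

If that runs on the hardware, the same -static flag (or -Wl,-Bstatic around individual libraries) can be applied to the real application. One caveat: even with -static, glibc may still load some pieces (e.g. NSS name-service modules) dynamically at run time.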
I'm still a new node js developer, currently building a personal project, and I recently found out that there are open source packages available on npm similar to the thing I'm developing.
These packages introduce advanced concepts that I haven't come up with yet and provide more options than I want, but after thinking about it, a question occurred to me: why not develop a package that serves me in my project the way I want instead of using packages where I won't use more than 5% of the functions in my project?
Benefits of using an existing, well-supported module:
You save your development time for things that haven't already been written by someone else, allowing you to make faster progress on your project
Well tested by the community (pre-tested code saves you lots of time)
Other people finding and fixing bugs (don't underestimate the importance of this)
The code will likely be kept up-to-date as tech changes over time
A possible community of people who know the package and can answer your questions about it
Non-issues with using an existing, well-supported module:
Code size is rarely an issue for server-side nodejs development so the fact that a package may contain extra code that you don't need is generally not a practical issue of any consequence. If code size is paramount (like say you were running on a small, embedded system), then nodejs itself might not be the right environment as it's not exactly compact.
Reasons not to use an existing, well-supported module:
You aren't allowed to use open-source code in your project (but then you wouldn't be using nodejs if that was the case).
No existing module does what you want.
Existing modules that do what you want don't appear to be well supported or have many relevant bugs that have been open for a long time. In this case, it still might be worth it for you to clone the repository and use it as a starting point or learning point for your own module.
I'm still a new node js developer, currently building a personal project, and I recently found out that there are open source packages available on npm similar to the thing I'm developing.
IMO, this is part of the magic sauce of doing nodejs development. The huge repository of open source packages (through NPM) that are so easy to use makes your development far more productive than developing everything from scratch yourself.
why not develop a package that serves me in my project the way I want instead of using packages where I won't use more than 5% of the functions in my project?
Unused code doesn't really cost you anything of consequence in a server-side environment. If you really wanted to, you could use a bundler that supports tree-shaking, which removes the code you're not using.
The question that really matters is whether an existing module meets your needs or is close enough that you only have to write a little bit of code in order to use it. If that's the case, then the question becomes this: "Why should I use my precious development time to write a package from scratch when I could use far less development time by using something that is already available for free, is already tested and is already proven, and then spend the development time I would have spent developing that package on other things that advance my product/service further?"
In many ways, this is really no different than using the fs module built into nodejs. You use it because it's already developed and already tested and saves you time over developing your own file access module. Yes, the fs module contains lots of code you may never need, but that's not the question. The question is whether it already contains the code you DO need.
When I make a new project, say a web app using Snap, I generate the skeleton using snap init barebones, make a new sandbox and then install the dependencies.
This takes forever. Seriously. If you have ever worked with pretty much any other web framework (node.js with express, for example), the process is nearly identical but takes a fraction of the time. I'm aware that most node dependencies do not require any compilation, but I find it really strange that this isn't considered a bigger problem. For example, I will never be able to run a Yesod app on my cheap VPS because the VPS isn't powerful enough to compile it and I can't really upload 500 MB of precompiled libraries.
The question is, why doesn't the repository host binaries instead of just code?
.NET is also compiled (to bytecode), but I can use its DLLs without any need for recompilation.
There are of course drawbacks to hosting binaries, like more storage space needed and multiple binaries per library for multiple OSes... But all of these problems seem insignificant compared to the huge benefits you get, such as:
No more compile errors
Much faster setup for new projects
Significantly less memory needed
Knowing that a library doesn't support your OS BEFORE you find out for yourself
I have trouble seeing why cabal hell exists in the first place. If all the libraries were available for dynamic linking, wouldn't the need for recompiling simply not exist at all?
Currently, one has to try really hard to stick with Haskell in this regard. It seems like the system punishes me for trying things out. If I want to add a new library to my project, I have to be sure I'm willing to wait 15-45(!!!) minutes for it to compile. Not to mention that a library fails to compile way more often than I'm comfortable with. Only after surviving that process can I actually figure out whether the library is what I want to use, or whether it's even compatible with the rest of my project.
In a nutshell: because native code is hard.
If you want to host binaries for arbitrary systems, you have to match the binaries to each system you want to run on. That may mean compiling dozens of sets of binaries to support all of the systems the code will compile on.
On the other hand, you may well find that someone has compiled the code you need: your distribution provider may well provide packages for the Haskell libraries you need.
Because that's the easiest way to distribute everything while keeping it up to date. By offloading build costs to the users, library authors only need to provide source code.
This can be mitigated in various ways. For example, my CI setup uses CircleCI and Heroku. Nodes on both hold precached cabal sandboxes (it's actually very easy to set up). I build my project on Heroku, but there's no reason why you couldn't take prebuilt artifacts from your CI and deploy them directly.
As for dynamic linking, it is possible to link Haskell modules dynamically, but shared libraries are more often than not a source of problems. One look at Windows DLL hell should be enough to see this, and most commercial applications simply ship the DLLs they use. If a library changes, the DLLs have to be replaced anyway, and the way Cabal does it makes it simplest to have the latest and greatest versions of everything.
First, note that on some platforms, you can in fact install binary libraries. For example, on my OpenSUSE Linux system, YaST will quite happily download and install certain Haskell libraries, without having to build anything from source.
Of course, this only covers a fairly small set of libraries, and all the RPMs will be many months out of date. (Not a big deal for X11, kind of a deal-breaker for something like Yesod that's under heavy development...)
I think another big part of the problem is that if you compile a Haskell library with GHC 7.6.4, then you cannot use that binary compiled library with GHC 7.8.3. So we're not just talking about one compiled binary for each OS; we're talking about one compiled binary for every OS + GHC minor point-release combination.
Oh, and did I mention? If you compile Yesod 1.4.0 against ByteString 0.9.2.0, then that compiled binary is useless if your system has ByteString 0.9.2.1 installed. So you potentially need one compiled binary for every OS, every GHC release, and every release of every library that it transitively depends on.
...This is partly why the Haskell Platform was invented. It's a single binary download that gives you a big heap of code that you don't need to compile from source, and where all the versions of the libraries in it are mutually compatible. (No dependency hell - the Haskell Platform maintainers sort that out for you!)
I do agree that binary packages would be extremely nice to have. But the above problems make it unlikely, IMHO.
I've been using the MonoTorrent library for a couple of weeks now and am looking for any kind of feedback or recommended alternatives.
The only issue I have with the library so far is that it is MUCH slower than uTorrent. I am not sure whether this is a configuration issue or whether it doesn't support a required feature, etc., but I require higher speeds for my needs, and I found that for the exact same file I can get a major difference (around 100x) in the number of seeders and the speeds.
I wanted to give libtorrent a try as well, but have not been able to even compile the Windows DLL, let alone write the required code to use it :-)
I admittedly don't know much about the history of the torrent protocol, but I found it strange that there is so little support in the C#/.NET world.
I was even considering wrapping the uTorrent client somehow, but that might be 'frowned upon' lol
Ended up using the libtorrent C++ library (running in a separate process, with a REST API added to communicate with the main program). It works well and the torrent performance is excellent.
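For anyone curious what that separate-process setup looks like, here is a minimal sketch of the worker side only, based on the libtorrent tutorial API (the REST layer, session settings and error handling are all omitted, and the calls may need adjusting for the libtorrent version you build):

    // torrent_worker.cpp - sketch of the helper process that owns the libtorrent
    // session. The REST API used to talk to the main program is not shown; this
    // only demonstrates adding a magnet link and polling download status.
    #include <libtorrent/session.hpp>
    #include <libtorrent/add_torrent_params.hpp>
    #include <libtorrent/magnet_uri.hpp>
    #include <libtorrent/torrent_handle.hpp>
    #include <libtorrent/torrent_status.hpp>

    #include <chrono>
    #include <iostream>
    #include <thread>

    namespace lt = libtorrent;

    int main(int argc, char* argv[]) {
        if (argc != 2) {
            std::cerr << "usage: torrent_worker <magnet-uri>\n";
            return 1;
        }

        lt::session ses;
        lt::add_torrent_params atp = lt::parse_magnet_uri(argv[1]);
        atp.save_path = ".";  // download into the current directory
        lt::torrent_handle h = ses.add_torrent(std::move(atp));

        // Poll progress once a second; a real worker would expose this state
        // through its REST API instead of printing it.
        for (;;) {
            lt::torrent_status st = h.status();
            std::cout << int(st.progress * 100) << "% "
                      << st.download_payload_rate / 1000 << " kB/s\n";
            if (st.state == lt::torrent_status::seeding) break;
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
        return 0;
    }

The main program then only has to talk HTTP to this worker, which keeps the C++/libtorrent dependency completely out of the C# side.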
I was thinking about this the other day and wanted to see what the SO community had to say about the subject.
As it stands right now Common Lisp is getting some attention as a web development platform, and with good reason (of which I'm sure you are already convinced).
I was wondering how one would go about using a library in a shared environment in a similar fashion to PHP.
If I set up something like SBCL as an interpreter to interpret FASL files, like Python or PHP, what would be the best way to use libraries (like clsql, for instance)?
Most come as asdf installable libraries, but it would be a stupid amount of overhead to require and install the library each and every time a request is made.
Keeping in mind this is for shared hosting, would it be best to:
1) Install system-wide copies of the libraries for use in applications; reduces space, but there may be problems with using the correct version of the library.
2) Allow users (through a control panel) to install local copies for themselves; more space, no version problems.
3) Tell them to wrap it into a module and load it on demand like Python does (I'm not sure if/how this can be done with Lisp). Just being able to load a library for use would be the best option, but I don't think a lot of them are designed to be used this way.
Anyways, looking to hear your opinions, thanks.
There are two ways I would look at it:
start a Lisp for each request
For this it would be much better if the Lisp were a saved image with all the necessary libraries and data loaded. But that approach does not look very promising to me.
run a Lisp and let a frontend (web browser, another web server, ...) connect to it
This way you can either start a saved image or a Lisp that loads a bunch of stuff once and serves the requests.
I like to use saved images/applications in a deployment scenario. They can be quickly started, contain all the necessary software and are independent of library changes.
So it might be useful to provide pre-configured Lisp images that contain the necessary software or let the user configure and save an image.
We are cross-compiling an application for an embedded Linux target under desktop Linux. For testing and other purposes we are using statically linked libraries with our application. The testing library we are using is CMockery.
My question is: Where should the static libraries and include files for CMockery live, given that we are cross-compiling?
If we weren't cross-compiling, things should go in /usr/local/lib.
Some suggestions from our team have been:
/opt/google/lib and /opt/google/include
/opt/embeddedLinuxDistro/usr/local/share/google/lib (and include)
/usr/local/arch/lib (and include)
Any pointers appreciated!
Note: After writing this answer, my summary would be:
Keep anything that is non-standard to the Linux distro you're using separate. In fact, keep files for different projects separate even if they share libraries. This will make it much easier to move your files to another machine, to set up multiple complete builds for testing, and most importantly, to be able to recreate the build from scratch.
The decision is really subjective.
Do you just need one copy of the library for all users?
Does it rarely change?
If your build machine caught fire and you had no backups of that machine, how quickly and easily could you re-build your environment of libraries and cross-compilers?
I ask these questions, because if the library changes often or different users may need different versions, you're better off having it be portable. That is, you can specify in your build where to find the files.
Of your team's suggestions, I would lean towards a path that contains a reference to your project. This will make it easier a year from now (when someone asks you to set up another build machine) to reproduce everything.
Lastly, I wouldn't worry about trying to adhere to "standard" library locations, because you're not creating and managing a Linux distribution. Furthermore, most people don't really know anything more than "/usr/lib" and "/usr/local/lib", and even the people who know those do not know the difference.
Do what's best for your project no matter what that may be.