Kronos-Haskell installs as a self-contained application and can exist (as near as I can tell) alongside an installation of the Haskell Platform without any issues or interactions. This is a nice feature; however, I would like to use a current version of Haskell, along with some additional packages I've installed into the Haskell Platform.
Is there a way to get Kronos-Haskell to use my installation of the Haskell Platform?
You can build IHaskell yourself using this recipe from the README:
git clone http://www.github.com/gibiansky/IHaskell
cd IHaskell
./macos-install.sh
Note - it might take a while as there are a ton of dependencies.
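If your goal is for the build to pick up your own Haskell Platform installation rather than a bundled one, it may be worth checking which GHC and cabal are first on your PATH before running the script (a minimal sanity check; the install script's exact behavior may vary):
which ghc      # should point at the Haskell Platform's GHC
ghc --version
which cabal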
Currently, rust-toolchain.toml allows specification of the development channel, target platform, and associated tooling (compiler, packager, etc.). Unfortunately, the components key, which accepts additional tools, does not accommodate cargo-watch or trunk (a cargo alternative for WASM crates). As a newbie, I find their rejection strange. Their exclusion limits the utility of rust-toolchain.toml for automating the tooling of a development environment.
The question is, am I missing something? Is there a way to integrate these tools into rust-toolchain.toml, is there some other way for them to be specified (apart from a shell script), or are they redundant?
Presently I install them manually: cargo install cargo-watch trunk. Yes, this is easy and simple, but it is also undocumented, forgettable, and clumsy.
I must say, rust and its tooling is impressive.
As far as I know, the components key is specifically for toolchain-internal components. These components are also toolchain-specific, e.g. a rustup +stable component add rust-src is different from a rustup +nightly component add rust-src.
On the other hand, crates from crates.io (which is what cargo install can install) are essentially toolchain-independent. So it makes sense to me that crates in general cannot be specified in a rust-toolchain.toml file, which is more about pinning the toolchain to a specific version.
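For illustration, here is a minimal rust-toolchain.toml sketch showing what the components key does accept (toolchain components distributed via rustup); crate names such as cargo-watch or trunk would be rejected here:
# rust-toolchain.toml (sketch)
[toolchain]
channel = "stable"
components = ["rustfmt", "clippy", "rust-src"]   # rustup-distributed components only
targets = ["wasm32-unknown-unknown"]             # e.g. the target trunk builds for
# crates.io tools (cargo-watch, trunk, ...) cannot be listed here;
# they are installed separately with cargo install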
However, for cargo plugins specifically, maybe you can find a compelling way to propose this as a new feature for cargo (e.g. allowing cargo plugins to be specified in config.toml).
No, the components key of a rust-toolchain.toml refers to a specific set of tools developed and distributed by the Rust language team; it is only used to augment the built-in cargo commands.
I recently started exploring npm and installed a GitHub repo, yoonic/nicistore.
But when I try to build it, it fails.
My question is: if I start building things on top of Node, which I see has too many dependencies from different vendors, am I completely at the mercy of the respective package developers?
I have seen that most Node-based GitHub repos fail to build on the first try. If I update one of the modules by running a console command, is it likely to break the whole application?
And if it does, doesn't that prove Node.js to be an unreliable and unstable development platform?
Think of it as the opposite of most other languages.
You are writing an app in Java.
You want to use LibA, LibB and LibC.
So you try to use LibC 2.4, and as soon as you do, your dependency manager throws all kinds of errors at you.
Why?
Because LibB is using LibC 1.9
So now what are your options?
Strip out all of the calls to all of the new API for LibC that you wanted to use...
...or hope that LibB is open-source, and you can contribute an update for a new version of it, so that you can use the latest version of LibC (and hope it doesn't update).
So now you've done that... but now you've broken your LibA, because it wants the old LibB.
You didn't even want LibA, you just had to have it for your app to be happy with your framework, and the libs that you did actually want to use (B and C). LibA is closed-source, and isn't maintained, anymore. Tough luck. Go back to your old ways, and forget about how much better life could be, if you could only use your framework with the new version of LibC. Or start praying that your framework does a major rewrite, to get rid of the LibA dependency... but then figure out what new hell you have to deal with, just to get LibC working.
Is this really better than Node?
What Node allows you to do is install a dependency at one version even when your other dependencies use a different version of that same library.
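As a rough sketch of how npm makes that possible (hypothetical package names; the exact layout depends on the npm version and how much it dedupes), each package can carry its own copy of a library inside its own node_modules:
your-app/
  node_modules/
    libC/            <- the LibC 2.4 your app asked for
    libB/
      node_modules/
        libC/        <- the LibC 1.9 that libB declared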
Not that you can't do that with Java, too... but the entire community has decided that it's just never going to try to do that, and thus outlaws it at the tool level.
Next, you see too many things which leave you at the mercy of too many vendors...
Going back to Java (or C++, or nearly any mainstream language): looking at Java itself, how many libraries are made by Sun Microsystems, or by James Gosling?
Moreover, if you want to boil it down, to suggest using only, say, one huge, overarching framework (like Spring MVC) and using no other libraries of any kind (like JodaTime), then how many libraries does Spring itself lean on, and why are they of no concern to you, even if you're just using the compiled VM bytecode?
In fact, a strong argument could be made to be more wary of compiled binaries, in languages where it was traditional to see strong, copy-left licensing like that of the GNU GPL... in that realm, you open yourself up to craziness.
Most of the Node stuff, by comparison, is dirt-simple freeware. And even if it's not, it's quickly replaceable as most are micro-libraries.
I would suggest that updating a Node package your server depends on via the CLI is less hazardous than doing the same to a full-fledged Java project, if your goal is to see your project compile again sometime in the next week, but with the newer fixes/features...
...but if you're talking about a full-scale, production application, you also want to be cognizant of what it is you're doing, with regards to your codebase, regardless.
As to why things don't build for you on the first try, assuming that you're on a non-Windows platform, and your environment is up to date, I don't know.
Most C/C++ projects I clone don't build for me, first try, either. I usually forget something, or there was something poorly documented, or the actual project was set up to make unfair assumptions about the system it would operate in.
Does that mean that C++ is an unreliable/unstable development platform?
Or the hours/days spent on getting Eclipse set up in an enterprise environment, with all kinds of crazy, company-specific projects and project settings?
It sounds like a case of bad design, more than anything.
Then again, most of my projects these days are wrapped in Docker containers. They all run in the same environment, whether they're running in Windows, on a Mac, or on the server. That tends to take the sting out of building projects, regardless of what language the code is in, or what VM / processor they're running on.
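As a minimal sketch of that setup for a Node app (the image tag, file names, and start command are illustrative):
# Dockerfile (sketch)
FROM node:lts
WORKDIR /app
COPY package.json npm-shrinkwrap.json ./
RUN npm ci            # install exactly what the shrinkwrap file says
COPY . .
CMD ["node", "server.js"]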
You should also be using npm shrinkwrap files or Yarn lockfiles to preserve the build configuration with the known-working versions of libraries. And you should have unit and integration tests to ensure that changing library versions has no discernible impact on your system.
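Roughly, the workflow looks like this (a sketch; pick one of the two tools):
npm shrinkwrap    # writes npm-shrinkwrap.json with the exact versions currently installed
# or, with Yarn:
yarn install      # creates/updates yarn.lock
# commit the generated file so every machine installs the same versions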
Practically speaking, are these essentially synonymous? Or is there something I'm missing? I've used Composer (PHP), CocoaPods (Objective-C), and Bundler (Rails). I believe they describe themselves as dependency managers, but can they also be considered package managers?
I'd say yes. Given that the JavaScript community calls their version of those tools (npm and Bower) "package managers", I think the development community has essentially synonymized those terms.
EDIT: I'm going to backtrack a bit. In general, I think the term package manager has to do with the delivery and installation of third-party code. That said, npm is correctly named the Node package manager.
As I see it, a dependency manager is a different thing: a runtime orchestration tool. For example, there are dependency managers that simply run in the browser to load asset files in the proper order (think RequireJS, Browserify, cartero, etc. - or think of a Dependency Injection container in, say, Symfony2 or Laravel), but you wouldn't call those package managers. A package manager would be something like Debian's dpkg or the Node community's Bower, which actually downloads third-party libraries for you (ones that aren't currently in your software suite).
Now, I think the blurred lines come in when package managers decided to be smart enough to resolve version numbers for us. Because tools like npm make sure that each piece of software we declare has all of the proper versions of its dependencies (by downloading a chain of dependencies for us), we want to call them dependency managers. But I think it's more proper to say that such a tool is a package manager that happens to do version resolution. It's really more of a delivery mechanism than a runtime tool, though.
All that to say, I'd like to hear what others have to say about this.
No, they are not synonyms. See this answer for the difference:
https://stackoverflow.com/a/27290095/4016254
I'm a good Java programmer, although the first languages I learned were C/C++. Anyway, for work reasons I switched to Java and web languages. Sometimes I get interested in this or that Linux project, usually coming as a git or svn repository... The problem is that I usually clone the repo, try to configure it, and install all the needed libraries (which takes ages); maybe I finally succeed... but then make fails, or configure itself fails, complaining about some tool that is missing. Or maybe I installed two versions of the same library and the configure script picks up the wrong one, or boring problems like that.
Anyway, I see loads of people using those tools every day, so it must not be so difficult after all!
Can you point out resources that may help in the first steps?
Thanks
What you are referring to are known as the autotools and make.
The autotools are used to generate scripts and build files that can be used to build as well as install a program/package/piece of software (whatever you call it).
Here is the Wikipedia link for the GNU build system in general.
And refer to this link for details about the autotools and related stuff.
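As a concrete starting point, building an autotools-based project from a repository checkout usually looks roughly like this (a sketch; projects vary, and some ship a ready-made configure script or an autogen.sh wrapper, in which case the first step differs):
autoreconf -i                      # generate ./configure from configure.ac (raw checkouts only)
./configure --prefix=$HOME/local   # check dependencies and choose an install location
make                               # build
make install                       # install under the chosen prefix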
It may take longer, but if you are interested in how to fix those problems yourself, I recommend learning how autoconf and automake work. I had a positive experience with the book "Autotools: A Practitioner's Guide to Autoconf, Automake and Libtool". I read the dead-tree version, but it is also available online: http://fsmsh.com/2753.
I'm developing an application using Python 3. What is the best practice for using third-party libraries, both during the development process and for end-user distribution? Note that I'm working within these constraints:
Developers in the team should have the exact same version of the libraries.
An ideal solution would work on both Windows and Linux.
I would like to avoid making the user install software before using our own; that is, they shouldn't have to install product A and product B before using ours.
You could use setuptools to create egg files for your libraries, assuming they aren't available in egg form already. You could then bundle the eggs alongside your software, which would need to either install them, or ensure that they were on the import path.
This has some complexities, e.g. if your libraries have C extensions then your eggs become platform-specific, but in my experience this is the most widely accepted means of 'bundling' stuff in Python.
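For example, building an egg for a dependency is roughly (a sketch; some-library is a placeholder, and the library must use setuptools in its setup.py):
cd some-library/             # an unpacked source tree of the dependency
python setup.py bdist_egg    # produces dist/some_library-<version>-pyX.Y.egg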
I have to say that this remains one of Python's weaknesses, though; the third-party ecosystem is certainly aimed at developers rather than end-users.
There are no best practices, but there are a few different tracks people follow. With regard to commercial product distribution there are the following:
Manage Your Own Package Server
With regard to your development process, it is typical to have your dev boxes update from a local package server. That allows you to "freeze" the dependency list (i.e. just stop getting upstream updates) so that everyone is on the same version. You can update at particular times and have the developers update as well, keeping everyone in lockstep.
For customer installs you usually write an install script. You can collect all the packages and install your libs, as well as the other dependencies, at the same time. There can be issues with trying to install a new Python, or even any standard library, because the customer may already depend on a different version. Usually you can install in a sandbox to separate your packages from the system's packages. This is more of a problem on Linux than Windows.
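A common way to get that sandbox on both Linux and Windows is a virtual environment (a sketch; with Python 3 the venv module is built in, and the paths below are illustrative):
python3 -m venv /opt/myapp/env                        # create an isolated environment
/opt/myapp/env/bin/pip install -r requirements.txt    # install your pinned dependencies into it
# on Windows the equivalent scripts live under env\Scripts\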
Toolchain
The other option is to create a toolchain for each supported OS. A toolchain is all the dependencies (up to, but not including, base OS libs like glibc). This toolchain gets packaged up and distributed to both developers AND customers. Best practice for a toolchain is:
change the executable name to prevent confusion (e.g. python -> pkg_python).
don't install in .../bin directories, to prevent accidental usage (e.g. on Linux you can install under .../libexec; /opt is also used, although personally I detest it).
install your libs in the correct location under lib/python/site-packages so you don't have to use PYTHONPATH.
Distribute the source .py files for the executables so the install script can relocate them appropriately.
The package format should be an OS native package (RedHat -> RPM, Debian -> DEB, Win -> MSI)
For developers, use pip with a requirements file.
For end users, specify requirements in setup.py.
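A minimal sketch of that split (package names and versions are hypothetical; pin exact versions for developers, looser ranges for end users):
# requirements.txt - exact, known-working pins for the team
somelib==1.2.3
anotherlib==4.5.6

# setup.py - the same dependencies as ranges, for end-user installs
from setuptools import setup, find_packages

setup(
    name="yourapp",
    version="0.1",
    packages=find_packages(),
    install_requires=[
        "somelib>=1.2,<2",
        "anotherlib>=4.5",
    ],
)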