I'm writing a library and a project that uses the library. With Stack it's common to put both in the same folder and maintain a multi-package project, but I want to have two separate projects instead. Stack does support external dependencies for this, but they are specified by location, and the project runs on multiple machines, so it's inconvenient to specify paths (unless it's possible to have nested Stack projects, but that kind of defeats the purpose of having separate packages). I also don't want to use git locations, because it feels burdensome to explicitly specify commits (or maybe this is not necessary?), and I don't want to pack the repository into an archive and download it each time I change something.
Ideally, I would like to be able to install the library on a machine and then reference it somehow in stack.yaml by its name, not location. Is it possible to do that? If not, is there a convenient way to maintain two separate but related packages?
I ended up using git submodules as jonrsharpe suggested. It has worked well so far.
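For reference, a minimal sketch of that setup with a hypothetical repository URL and path; the library repo becomes a submodule of the application repo, and the application's stack.yaml references it by relative path:

git submodule add https://github.com/yourname/my-library.git deps/my-library
git submodule update --init
# then, in the application's stack.yaml:
#   packages:
#   - .
#   - deps/my-library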
I'm trying to write a small web application fully in Haskell. I have 3 logical packages:
A backend, using servant
A frontend, using reflex, reflex-dom and servant-reflex
A shared package defining the Servant API used for communication between the two, plus some data types for that API to use.
That last package is giving me trouble: I don't know how to structure the project so the other two packages can use it. I see two options at the moment:
Each package has its own stack file and git repository. Import the shared package using an extra-deps git link. The problem with this approach is that I have to push every change to the shared package to GitHub before I can test it with the other packages. I'd also have to build everything separately.
Use a single repository with a single stack.yaml file. I'd prefer this, since it keeps everything together and also ensures all packages use the same resolver. In this case I would list all the packages under the packages: option. However, the client needs to be compiled with GHCJS, not GHC, and I don't see an option in the documentation to override the compiler for one specific package.
Is there a way to make option 2 work? Or is there a better way to do this?
The recommended approach is to have two stack project files (e.g. stack-frontend.yaml using GHCJS and stack-backend.yaml using GHC), and then use the --stack-yaml argument to switch between them (e.g. use stack --stack-yaml=stack-frontend.yaml build to build the frontend, and stack --stack-yaml=stack-backend.yaml build to build the backend). Both stack-*.yaml files can include the shared servant API.
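As a rough sketch of what those two files could look like (the resolver and the GHCJS compiler version are illustrative; a GHCJS setup typically needs additional configuration as described in the Stack documentation):

stack-backend.yaml (GHC):
resolver: lts-9.21
packages:
- backend
- shared

stack-frontend.yaml (GHCJS):
resolver: lts-9.21
compiler: ghcjs-0.2.1.9007021_ghc-8.0.2
compiler-check: match-exact
packages:
- frontend
- shared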
I have a locally authored Haskell project, which produces both:
a binary executable, and
several new Haskell modules, which I'd like to make accessible to my other Haskell-based executables.
After:
stack build
stack install
I'm finding that:
the binary executable (#1, above) runs just fine from any directory.
But, the new Haskell modules (#2, above) are only found when I'm running from within my project directory! (That is, for any executable other than #1, above.)
I need to be able to find the new modules from anywhere.
How can I achieve this?
Each Stack project is in its own sandbox, so the compiled modules can only be used within that project. Compiled dependencies (which come from a Stackage snapshot) sometimes get shared between projects.
Note that you can list a relative path in the packages list, and point to this package. It will get built again, but it can be directly used in another project this way. Why the extra building? Stack has a different model of projects than cabal-install - it does not allow mutations to the package DB to affect how your other projects build.
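For example, a stack.yaml for the consuming project could look like this (the resolver and the ../my-library path are placeholders):

resolver: lts-9.21
packages:
- .
- ../my-library

The library gets compiled inside this project's sandbox (the extra building mentioned above), but nothing has to be pushed or packaged.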
One option for sharing such a package is to have it in a git repo and use https://docs.haskellstack.org/en/stable/custom_snapshot/, but that feature is still a bit new.
Project A contains a few functions and data models I use in different repos, all tied to the same product. I'd like to turn them into an npm module, but without extracting the code from project A.
When I see other modules on npm, they generally point to a GitHub repo that contains all the source code, as well as a full stack to run/modify the module.
Does this mean I have to extract the code from project A into its own repository, build/configure a stack to allow it to run in isolation from project A, and then import it back into project A & other projects?
Or is it possible to just export the functions without a full stack, and without moving the code from my main project?
An attempt to pre-empt 'duplicate' comments:
this Q talks about working with an existing module, which doesn't answer my concern, as it has to do with worrying about pull requests being merged on time
npm link, discussed here, looks like it would do the trick if I'd already extracted the code from the project, but I'd like to avoid that.
If you really want to share a snippet through npm but still use the code at the same place in your project, you could extract the code into its own repo, but still use it inside your project as a git submodule.
Create a submodule repository from a folder and keep its git commit history
Do you know if it's standard for npm modules in their own repos to include the full stack for running them?
Ideally it's there to test them and ease development, but it's entirely optional. You could publish just a JavaScript file and a package.json and it would work.
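A minimal sketch of such a package, with hypothetical names: index.js holds the exported functions, and this package.json is the only other file required:

{
  "name": "shared-utils",
  "version": "1.0.0",
  "description": "Shared helpers extracted from project A",
  "main": "index.js"
}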
How can we organize our file-system and processes if there are multiple publishers on an npm module? Do we need a common repository (e.g. Git) or is there a smart way to use npm's own publishing & updating process?
The main issue I can't get my head around is that the initial publisher of the package isn't able to get the latest version from within the package itself, are they? Unless they install it as a dependency of another package and then update & publish from within that dependency.
It's certainly possible to use npm in this way, although it's probably not what it's intended for. The initial publisher of the package will create the necessary structure, e.g. using npm init, then do an npm publish to make it available to your other publishers.
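A sketch of that flow with placeholder package names (each publisher needs to be logged in to the registry first, e.g. via npm adduser):

npm init          # answer the prompts to create package.json
npm publish       # first release, e.g. 1.0.0
# a later publisher, after pulling the latest source:
npm version patch # bumps the version to 1.0.1 and, in a git repo, tags the commit
npm publish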
A couple of things to consider. First, unless you have a tightly controlled system for taking it in turns to edit, you'll almost certainly get merge conflicts, where multiple publishers make changes simultaneously. A version control system like Git can help resolve these conflicts.
Secondly, you may not want your intermediate versions to be publicly available, for many reasons - it's likely you'll want to push out some (possibly incomplete) changes for your co-publishers to build on. Or you just might not want your code to be out in the wild during development. So you may wish to consider a private repository if you do go down this route - e.g. Sinopia, or one of the hosted solutions.
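If you do try something like Sinopia, pointing npm at it is usually just a registry setting in .npmrc (4873 is Sinopia's default port; adjust host and port for your setup):

registry=http://localhost:4873/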
Hope that helps. For info, I use a combination of mercurial (for version control) and a private npm repo (sinopia).
In doing research on whether Node's node_modules should be checked into your version control repository, the most recent consensus seems to be that you should include it for applications you deploy.
Sources:
Check in node_modules vs. shrinkwrap
Should I check in node_modules to git when creating a node.js app on Heroku?
https://www.npmjs.org/doc/misc/npm-faq.html#Should-I-check-my-node_modules-folder-into-git
Reading these arguments led me to question whether Composer's /vendor directory should also be checked into version control. Composer's documentation suggests that you don't:
Should I commit the dependencies in my vendor directory?
The general recommendation is no. The vendor directory [...] should be added to .gitignore/svn:ignore/etc.
The best practice is to then have all the developers use Composer to install the dependencies. Similarly, the build server, CI, deployment tools etc should be adapted to run Composer as part of their project bootstrapping.
While it can be tempting to commit it in some environment, it leads to a few problems:
Large VCS repository size and diffs when you update code.
Duplication of the history of all your dependencies in your own VCS.
[...]
Contrasting that argument is this one (source):
Doesn’t checking in node_modules create a lot of noise in the source tree that isn’t related to my app?
No, you’re wrong, this code is used by your app, it’s part of your app, pretending it’s not will get you in to trouble. You depend on other people’s code and they are just as likely to write new bugs as you are, probably more so. Checking all of that code in to source control gives you a way to audit every line that ever changed in your application. It allows you to use $ git bisect locally and be ensured that it’s the same as in production and that every machine in production is identical. No more tracking down unknown changes in dependencies, all the changes, in every line, are viewable in source control.
In summary, the question is this: Why would one gitignore (i.e. not version control) node_modules but not do the same for Composer's vendor/ directory?
The reasons to commit external dependencies are:
it's easier to deploy with git pull
the code used is directly included in the commit anyone checks out
Arguments against this are:
Git is not a deployment tool
all dependency managers have a way to make sure that exactly the code that was used can be fetched again
I don't know anything about npm, but for Composer that last point is implemented by committing composer.lock.
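In practice that means committing composer.json and composer.lock while keeping vendor/ in .gitignore; any other machine then reproduces the same dependency tree:

git add composer.json composer.lock
git commit -m "pin dependency versions"
# on any other machine, CI server or deploy target:
composer install   # installs exactly the versions recorded in composer.lock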
I don't think the "audit code" argument is valid in every case. If you write software that itself gets audited, and consequently needs all of its libraries audited as well, then probably all code changes do need to be preserved. That isn't the general case, though.
git bisect still works just as well with a committed composer.lock. It does require installing the dependencies at every bisect step, but that is a single simple step, probably already done by the automated test suite run anyway.
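A sketch of such a bisect run, assuming a PHPUnit test suite at the conventional vendor/bin/phpunit path and placeholder revisions:

git bisect start HEAD v1.2.0
git bisect run sh -c 'composer install --quiet && vendor/bin/phpunit'
git bisect reset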
The last thing to worry about is used packages going offline. This really is more of a problem with Composer, because there is no central hosting of the downloadable releases (npm probably does provide this). If that is a concern, either commit the code (and try to figure out how to update that missing package in the future), or set up an instance of Satis to keep a local copy of the code you use.
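A minimal satis.json for such a local mirror might look roughly like this (name, URLs and the package list are placeholders; see the Satis documentation for the full set of options):

{
  "name": "acme/package-mirror",
  "homepage": "https://packages.example.com",
  "repositories": [
    { "type": "vcs", "url": "https://github.com/acme/some-library" }
  ],
  "require-all": true,
  "archive": { "directory": "dist" }
}

The archive section tells Satis to keep local copies of the package downloads rather than only mirroring the metadata.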
Putting all your modules in your VCS makes it really heavy to download or upload. For example, I work on two Node.js projects, and the total node_modules directory size is between 250 MB and 500 MB, whereas the whole codebase with assets is generally less than 40 MB. Every Node.js developer appreciates Node's lightness, so the code should stay easy to download and share.
For the second point, an alternative way to avoid regressions is to be more restrictive with the dependency versions in your package.json. You will find more information here.
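For instance, pinning exact versions or allowing only patch-level updates instead of broad ranges (the package names and versions here are just examples):

{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "4.16.4",
    "lodash": "~4.17.11"
  }
}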
Finally, the best argument is to take a look at the well-known modules everybody uses:
express ignores node_modules
mocha ignores node_modules
q ignores node_modules
...
The more I research this, the more I'm starting to think there is no single correct answer here, just different opinions and the pros and cons of each method.
This blog, looking from the context of Bower, does a good job of weighing the pros and cons of each: http://addyosmani.com/blog/checking-in-front-end-dependencies/.
In short: At least for right now, weigh the pros and cons and decide what best fits your situation.