stack setup downloads and installs GHC for a project (into ~/.stack/programs, ~/.stack/snapshots, and somewhere else I don't know about yet).
stack build downloads dependencies and builds them (into ~/.stack/setup-exe-cache and somewhere else).
I want to clean up the project-wide GHC install and the downloaded dependencies/build output, plus all the other project-related things left on my disk.
Is there no way to do this other than deleting them manually?
The stack clean command clears the local cache in .stack-work.
A feature for cleaning the ~/.stack cache is not implemented yet; see this GitHub issue:
https://github.com/commercialhaskell/stack/issues/133
stack setup installs GHC for a project, but it stores GHC globally (so you don't need to install GHC again for another project that uses the same version of GHC).
You can just do rm -rf .stack-work to clean the project-local build cache (built modules, GitHub dependencies for the project, etc.). However, rm -rf .stack-work won't work for a multi-package project; run stack clean --full instead to clear the project's local cache completely.
To clean the global cache, you can just do rm -rf ~/.stack.
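Putting those commands together, a full cleanup (after which stack setup will simply re-download GHC) looks like:

stack clean --full   # clear the project-local cache (.stack-work) for all packages
rm -rf ~/.stack      # clear the global cache: GHC installs, snapshots, setup-exe-cache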
Again, citing the latest comment from the issue discussion:
The garbage collection question definitely needs to be answered in some form or another. If possible, I think I might find a documentation solution preferable to a new command. It would be great if the manual discussed the directory structure of ~/.stack and explained what directories were safe to delete.
Related
I work on a Rust project that has a lot of packages as explicit or implicit dependencies (~420). When I want to rebuild the target after changing the .env file (which configures things like the IP address to download files from), I would like to rebuild only the packages that I authored, not all the dependencies.
How can I tell cargo build to use the previously compiled dependencies, but not use the previously compiled package that uses the .env file as input?
Ideally, cargo build would realize that the .env file has changed and automatically decide to rebuild only the parts that use the .env file, but unfortunately this doesn't seem to be the case.
So the second best solution is to manually tell cargo build at which point in the build graph to start off again.
We're using the dotenv crate (https://crates.io/crates/dotenv) to read the .env file.
I tried cargo clean -p nextclade to tell it to clean only the package in question that I'm working on, but that still cleans up all the dependencies, which makes my build take 5 minutes rather than 2 minutes (the time when compiled dependencies are reused).
There's a question that seems similar, but it is actually a different use case/setup, so it's not a duplicate: How does cargo decide whether to rebuild the deps or not?
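(For reference, and not part of the original question: if the .env file is consumed at compile time via a build script, rather than read at runtime by dotenv, a one-line directive makes cargo track the file, so only the packages whose build scripts watch it get rebuilt. A minimal sketch:)

// build.rs -- sketch; only relevant if .env is a compile-time input
fn main() {
    // Re-run this build script, and rebuild only this package and its
    // dependents, whenever .env changes.
    println!("cargo:rerun-if-changed=.env");
}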
When I run npm install in a project, it often modifies package-lock.json, for example when I work on the same project from another computer (with a different node or npm version).
But at the same time the documentation suggests that the file is supposed to be added to version control (git in my case):
https://docs.npmjs.com/files/package-lock.json
This file is intended to be committed into source repositories, and serves various purposes: ...
So should I commit the changes made by npm back and forth when switching work machines or when somebody else does npm install? This would be a nightmare.
Currently I just discard any changes to package-lock.json made by npm, and it's been working fine. So I might as well add it to .gitignore...
Am I doing it wrong? Should I use npm ci instead? I wouldn't call my computer a "CI"; it's just a development machine, so why should I use it there?
Basically I have the same question as this gentleman:
https://github.com/npm/npm/issues/18103#issuecomment-370401935
(Sadly, I can't comment on that issue or create a new one at all; the npm repo has issues disabled.)
Yes, you want to commit your package-lock.json file to source control. The reasoning is to ensure that exactly the same version of each package is downloaded and installed for every user who pulls down the code. There are other reasons to include the file as well, such as tracking changes to your package tree for auditing.
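As for npm ci: despite the name, it isn't only for CI servers. It installs exactly what package-lock.json specifies and never modifies the lockfile, so it's a reasonable choice on any machine where you just want to reproduce the locked tree:

npm ci        # install exactly per package-lock.json; never edits the lockfile
npm install   # resolves against package.json and may update package-lock.json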
Is it possible to install a package from source with something similar to stack build package-name? (The latter works with packages on Stackage, but not with custom ones.)
Um, stack build (within the source directory)?
Stack doesn't really have a notion of installing libraries, though; it only installs executables. To "install" locally-sourced packages, you need to specify what you want them installed for: add them as dependencies to another project, via a location: field in the packages: section of that project's stack.yaml file.
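For illustration, a sketch of that stack.yaml layout (the local path is hypothetical, and the exact syntax differs between Stack versions):

packages:
- '.'
- location: ../my-local-package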
That's arguably sensible since, one might say, there's nothing you can do with an installed library except invoke it from another Haskell project (or from a REPL, which you can get with stack ghci). I personally don't hold with that, though; I like actually being able to say install that library now. That is one of the reasons I have always stuck with good old cabal-install rather than Stack. With that, you can just run
cabal install
from within the source directory.
Cabal-install has often been criticised: its local installs can easily get out of sync, and then you have weird dependency conflicts and need to rebuild lots of stuff. I never found this much of a problem, and in any case it has been addressed in recent Cabal through Nix-style builds, which never produce conflicts.
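With those Nix-style builds, the commands look like the following (a sketch assuming cabal-install 3.0 or later; the --lib flag exposes the library through a GHC environment file):

cabal v2-build          # build into dist-newstyle; the shared store never conflicts
cabal v2-install --lib  # make the library visible to GHC and GHCi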
I am trying to learn cabal and have tested it on several of my own little projects; now I want to clean them up.
Basically, if I am working without a sandbox, my workflow is:
run cabal init
edit src/Mylib.hs, and then edit mylibname.cabal file
run cabal build
run cabal repl and test my code
run cabal install
Now, I see my own project:
installed into ~/.cabal/lib/x86_64-linux-ghc-7.10.1
registered in ~/.ghc/package.conf.d
I can write import Mylib in my other Haskell source code, so I think the package is successfully installed.
Then I want to uninstall the package, as the package itself is just meaningless experiment code.
I read this article, which says:
There is no "cabal uninstall" command. You can only unregister
packages with ghc-pkg:
ghc-pkg unregister
so I run
ghc-pkg unregister mylibname
Now it seems that the package is unregistered from ~/.ghc/package.conf.d; however, there is still a compiled library in ~/.cabal/lib/x86_64-linux-ghc-7.10.1.
So, how can I completely remove my project? Could I just rm -rf the library in ~/.cabal?
You can delete the files yourself from the packages directory. However, the reason no command is provided for this is that there is, in general, no guarantee that nothing else has linked against those libraries, so such deletions may cause breakage. That said, there's also a tool that goes and does the deletion for you if you really want it:
http://hackage.haskell.org/package/cabal-uninstall
And there's a tool with a bit more functionality that also lets you figure out which packages have no reverse dependencies, so that at least no other packages break:
https://github.com/iquiw/cabal-delete
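Concretely, the two routes look roughly like this (the versioned directory is taken from the question, and the cabal-uninstall invocation is an assumption; check the tool's documentation):

# manual route: unregister, then delete the compiled library
ghc-pkg unregister mylibname
rm -rf ~/.cabal/lib/x86_64-linux-ghc-7.10.1/mylibname-*

# tool route: let cabal-uninstall do both steps (usage assumed)
cabal install cabal-uninstall
cabal-uninstall mylibname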
Checking in node_modules used to be the community standard, but now we also have the option to use shrinkwrap. The latter makes more sense to me, but there is always the chance that someone did a "force publish" and introduced a bug. Are there any additional drawbacks?
My favorite post/philosophy on this subject goes all the way back (a long time in node.js land) to 2011:
https://web.archive.org/web/20150116024411/http://www.futurealoof.com/posts/nodemodules-in-git.html
To quote directly:
If you have an application that you deploy, check in all your dependencies into node_modules. If you use npm to deploy, only define bundleDependencies for those modules. If you have dependencies that need to be compiled, you should still check in the code and just run $ npm rebuild on deploy.
Everyone I've told this to tells me I'm an idiot, and then a few weeks later tells me I was right: checking node_modules into git has been a blessing to deployment and development. It's objectively better, but here are some of the questions/complaints I seem to get.
I think this is still the best advice.
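For reference, bundleDependencies (mentioned in the quote) is declared in package.json; a minimal sketch with hypothetical package names:

{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.2"
  },
  "bundleDependencies": ["express"]
}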
The force-publish scenario is rare, and npm shrinkwrap would probably work for most people. But if you're deploying to a production environment, nothing gives you peace of mind like checking in the entire node_modules directory.
Alternately, if you really, really don't want to check in the node_modules directory but want a better guarantee there hasn't been a forced push, I'd follow the advice in npm help shrinkwrap:
If you want to avoid any risk that a byzantine author replaces a package you're using with code that breaks your application, you could modify the shrinkwrap file to use git URL references rather than version numbers so that npm always fetches all packages from git.
Of course, someone could run a weird git rebase or something and modify a git commit hash... but now we're just getting crazy.
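A shrinkwrap entry pinned to git, as that advice suggests, might look roughly like this (the package name, URL, and placeholder hash are hypothetical, and the exact file format varies across npm versions):

"some-dependency": {
  "version": "git+https://github.com/someuser/some-dependency.git#<commit-sha>"
}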
npm FAQ directly answers this:
Check node_modules into git for things you deploy, such as websites and apps.
Do not check node_modules into git for libraries and modules intended to be reused.
Use npm to manage dependencies in your dev environment, but not in your deployment scripts.
(cited from the npm FAQ)