Adding node_modules to repo makes ZSH slow - node.js

I recently added my node_modules directory to version control per this answer's advice.
(By the way, I'm not necessarily of the opinion that adding node_modules to version control is a good practice, but I'm trying to get a deployment working and I don't have anything else to try right now.)
My problem now is that every command I run is followed by a roughly five-second pause before I get my command prompt back. I assume this is because I have a ~700MB node_modules directory.
Is there a way to speed up ZSH or do I just have to live with this slowness if I decide to check in node_modules?

Your question is not entirely clear, but what I infer is happening is:
You have zsh and/or your zsh plugins configured to include some aspects of git repo status in your prompt.
Each time zsh renders your prompt, it therefore has to run one or more git commands.
Because your repository is so large, those commands are slow.
If this is indeed the case, the first thing you should do is alter your zsh configuration to keep these details out of your prompt. This can be done temporarily just while you are working on this particular project. That might alleviate the biggest pain point without much cost/effort.
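For instance, a couple of common ways to do that (assuming your prompt's git integration comes from oh-my-zsh or from zsh's built-in vcs_info; adapt to whatever theme/plugin you actually use):

# oh-my-zsh: silence git status in the prompt for this repository only
git config --add oh-my-zsh.hide-status 1
git config --add oh-my-zsh.hide-dirty 1

# plain zsh with vcs_info: disable all VCS back-ends in ~/.zshrc
zstyle ':vcs_info:*' enable NONE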
Secondly, you could make node_modules as small as possible with npm dedupe, and then eliminate dev dependencies with npm prune --production, so the dev dependencies stay local files while only the dependencies needed for production go into git. That might require some clever/verbose configuration in .gitignore, but it may be workable.
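For instance, before committing node_modules (a minimal sketch):

npm dedupe               # collapse duplicated packages to shrink the tree
npm prune --production   # remove devDependencies so only runtime deps remain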
But ultimately deps-in-git is the path to failure for reasons like this. Source Code Management is for Source Code.

Related

I am using Coverity to analyse a node-ts template for a service. What should I use to build it?

Steps:
Installed coverity
Configured the compilers:
cov-configure --javascript
cov-configure --cs
I am stuck at the build step, cov-build. Yarn is used to run and configure the service, but I am not sure what Coverity wants here.
I tried a couple of npm run commands, but every time I end up getting this:
[WARNING] No files were emitted. This may be due to a problem with your configuration
or because no files were actually compiled by your build command.
Please make sure you have configured the compilers actually used in the compilation.
I also tried different compilers, but no luck.
What should be done in this case?
You need to do a filesystem capture for the JavaScript files. You can accomplish this by running cov-build with the --no-command flag:
cov-build --dir CoverityIntermediateDir --no-command --fs-capture-list list.txt
Let's break down these options:
--dir: the intermediate directory in which to store the emitted results (used by cov-analyze later).
--no-command: do not run a build command; instead, look for files of certain types to capture.
--fs-capture-list: use the file provided to determine which files to examine and possibly emit to the intermediate directory.
A recommended way to generate the list.txt file is to grab it from your source control. If using git run:
git ls-files > list.txt
I also want to point out that if you don't have a convenient way to generate a file listing for --fs-capture-list, you can use the --fs-capture-search option instead and pair it with a filter to exclude the node_modules directory.
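Something along these lines (a sketch; the exact exclude flag name can vary between Coverity versions, so verify against cov-build --help or the docs):

cov-build --dir CoverityIntermediateDir --no-command --fs-capture-search . --fs-capture-search-exclude-regex node_modules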
The Coverity forums have some useful questions and answers:
Node.js File system capture
Really, though, the best place to look is the documentation; there are several examples of what you want to do in their guides.

how to make cabal sandbox aware of (installed) packages in other locations?

When I have a sandbox, it seems cabal install ignores packages in $HOME/.ghc/x86_64-linux-7.8.4/package.conf.d.
How can I configure the sandbox such that these packages become visible?
I am seeing a vague reference to --package-db=db in https://www.haskell.org/cabal/users-guide/installing-packages.html#sandboxes-advanced-usage
but I understand neither how nor when to use it. (With sandbox init? configure? install? None seems to work; none gives any error message either.)
I know about add-source but my question refers to installed packages.
The whole point of the sandbox is that it ignores your local package database.
If you want to share installations across many sandboxes, you may install to the global database; but then you should be very careful, as recovering from a broken package there is much more difficult. Keep the global database to really core packages that you expect to be shared across many, many projects -- not just the half dozen you're stressing out about right now for your job.
Alternately, you may share one sandbox between the builds of many packages; simply set the CABAL_SANDBOX_CONFIG variable to an absolute path pointing to the appropriate cabal.sandbox.config file. This is significantly safer, and much more flexible, as you can choose how widely your installed packages are shared (and in bad cases, simply nuke the sandbox and start over).
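A sketch of that setup (paths here are examples, not prescriptions):

# create the shared sandbox once
mkdir -p ~/shared-sandbox && cd ~/shared-sandbox && cabal sandbox init
# then, from any project that should build against it:
export CABAL_SANDBOX_CONFIG=~/shared-sandbox/cabal.sandbox.config
cabal install --only-dependencies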
Here is something you can try: copy (or symlink) the files from ~/.ghc/{arch-os-ghc-version}/package.conf.d to the sandbox's {arch-os-ghc-version}-packages.conf.d directory.
There is an open question about the package.cache file, but the following procedure seems to be a safe way to proceed:
Start with an empty sandbox
Copy the package.conf.d files from ~/.ghc to the sandbox (including package.cache)
Add packages to the sandbox via cabal install --only-dependencies
I don't know whether the package.cache file is required or whether there is a way to rebuild it.
One disadvantage is that cabal install --only-dependencies seems to reinstall broken packages in the sandbox even if they are not required by your application. Maybe there is a work-around for this.
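Concretely, that procedure might look like this (a sketch; the x86_64-linux / GHC 7.8.4 directory names are taken from the question and will differ on other setups):

cabal sandbox init
cp -a ~/.ghc/x86_64-linux-7.8.4/package.conf.d/. .cabal-sandbox/x86_64-linux-ghc-7.8.4-packages.conf.d/
cabal install --only-dependencies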

Remove git-annex repository from file tree

I tried installing git-annex yesterday to backup my files. I ran git annex add . in the root of my repository tree and then a git commit. So far everything is fine.
What I didn't know was that git-annex would turn my entire file tree into a whole bunch of symlinks. Every single file in my tree is now symlinked into .git/annex/objects! This is messing up my application, which depends on files not being symlinks.
My question is, how do I get rid of git-annex and restore my file system to its original state? For a normal git repo I could do rm -r .git, but I'm afraid that won't do the job in git-annex. Thanks in advance.
Okay, so I stumbled upon some docs for git-annex, and they give two commands that achieve what I wanted to do:
unannex [path ...]
Use this to undo an accidental git annex add command. You can use git annex unannex to move content out of the annex at any point, even if you've already committed it.
This is not the command you should use if you intentionally annexed a file and don't want its contents any more. In that case you should use git annex drop instead, and you can also git rm the file.
uninit
Use this to stop using git annex. It will unannex every file in the repository, and remove all of git-annex's other data, leaving you with a git repository plus the previously annexed files.
I started running git annex uninit, but my god was it slow. It took about 5 minutes to "unannex" just a single file. My filesystem tree is about 200,000 files, so that was just unacceptable.
What I ended up doing was actually surprisingly simple and worked well. I used cp with the -rL flags to duplicate the contents of my file tree, dereferencing all the symlinks in the copy. And it was blazing fast: around 30 seconds for my entire file tree. The only problem was that the file permissions were not retained from my original state, so I needed to run some chmod and chcon commands to fix them up.
This second method worked for me because there were no other symlinks in my schema. If you do have symlinks in your schema beyond those created by git-annex, then my little shortcut probably isn't the right choice for you, and you should consider sticking with just git annex uninit.
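For reference, that shortcut amounts to something like this (directory names are hypothetical):

cp -rL annexed-repo plain-copy   # -L dereferences the annex symlinks while copying
chmod -R u+rw plain-copy         # annexed objects are read-only, so restore write permission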
I would like to include my own experience of using git annex uninit, in addition to OP's answer.
I didn't have the full repository annexed, only about 40 of the bigger files. After deciding that git-annex gave me no particular benefit, I tried unannexing several files, and each finished in a few seconds. Then I ran git annex uninit, and it took more than a minute only for the really huge files (more than a few GB). Overall it was done in about 20 minutes, which was acceptable in my case.
So it seems that the cost of unannexing grows with the size of the annexed files.
If you have a v6 repository, you can do the following:
git annex unannex . --fast
which replaces the symlinks with hard links instead of slowly replacing the symlinks with the original files again.
Only v6 repositories can execute the git annex unannex command on uncommitted changes, so it could be necessary to upgrade the git-annex repo to a v6 repository.
See the Official Upgrade Guide.
In my case I had to upgrade v5 -> v6 and I only had to execute
git annex upgrade
which took a few seconds and I was done.
Have you tried to use git-annex in direct mode?
Just change your repository with
git annex direct
This will not use symlinks any longer, but some git commands do not work with such annex repositories.
Check out the explanations on their website to see if this scheme fits your purposes.
Maybe the conversion process is faster than the previously mentioned approaches; I haven't tried it myself with big repositories.

how to prevent needing "npm rebuild" every time with rsync and Node.js on Linux

My deployment approach may be naive. I'm using rsync, and it works for the most part on many Node.js websites, except for certain ones that have build dependencies for XML. I suppose I could try Git, but I was concerned about bloat on the VMs and wanted to keep them as lean as possible.
Is there a better way to do this using rsync options, or should I try an alternative deployment approach? rcpy seems bad.
Or, if I must npm rebuild, what commands would I chain in a shell script to automate this?
UPDATE: I went with this approach: a shell script that SSHes to the remote machine and runs npm rebuild there.
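A minimal sketch of such a script (user, host, and paths are hypothetical):

#!/bin/sh
# sync the app to the server, then rebuild native addons on the target machine
rsync -az --delete ./ deploy@example.com:/srv/myapp/
ssh deploy@example.com 'cd /srv/myapp && npm rebuild'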
If you run exactly the same node.js version on exactly the same processor architecture, you don't need npm rebuild, since your binaries will work on the target without a change.
Otherwise there is no way to avoid it (except by removing binary dependencies entirely, of course).

Linux configure/make, --prefix?

Bear with me, this one's not very easy to explain...
I'm trying to configure, make and make install Xfce into my buildroot build directory. When configuring I'm using
--prefix=/home/me/somefolder/mybuild/output/target
so that it builds to the right folder; however, when the image is compressed and run, I get errors from various config files that look for their files in
/home/me/somefolder/mybuild/output/target
(which of course doesn't exist on the device).
How do I set what folder to build into, yet set a different root directory for the config files to use?
Run configure --help and see what other options are available.
It is very common for projects to provide options to override specific install locations. By convention, --prefix sets the default for all of them, so you need to override the config location after specifying the prefix. This course of action usually works for any automake-based project.
The worst-case scenario is when you need to modify the configure script, or even worse, the generated makefiles and config.h headers. But yeah, for Xfce you can try something like this:
./configure --prefix=/home/me/somefolder/mybuild/output/target --sysconfdir=/etc
I believe that should do it.
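One more trick that may help in a buildroot-style setup (a sketch; DESTDIR is the standard automake staging mechanism, not something Xfce-specific): configure with the paths the program should use at runtime, then stage the installation into the output directory at install time:

./configure --prefix=/usr --sysconfdir=/etc
make
make DESTDIR=/home/me/somefolder/mybuild/output/target install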
In my situation, --prefix= failed to update the path correctly (there were some warnings or failures). Please see the link below for the answer:
https://stackoverflow.com/a/50208379/1283198
