Yarn 2 workspaces, PnP, and unit tests in child workspaces - node.js

I have a parent workspace A that has a dependency on a child workspace B. I am using PnP.
If I run yarn install in A's directory, the file .pnp.cjs is created. That's expected because I need to ensure that when packages A and B depend on a third-party package like graphql-js, only a single instance of graphql-js is loaded when A runs, or else this happens.
However, maybe I have unit tests or some yarn script I want to run directly inside project B. When I try to run the script, it complains: Usage Error: The project in B/package.json doesn't seem to have been installed - running an install there might help. OK, fine, so I run yarn install in B's directory, which creates a .pnp.cjs file in the directory. Then I try to run the tests/script again in B:
Error: Unable to locate pnpapi, the module 'B/[file]' is controlled by multiple pnpapi instances.
This is usually caused by using the global cache (enableGlobalCache: true)
Controlled by:
B/.pnp.cjs
A/.pnp.cjs
How is it possible to run unit tests or scripts in B, which requires B to have its own .pnp.cjs file, and still have dependencies controlled by A to avoid multiple instances of 3rd-party packages when A runs?
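For reference, the sequence that reproduces the error looks roughly like this (directory names follow the question; the test script name is an assumption):

cd A
yarn install   # creates A/.pnp.cjs for the whole workspace tree
cd B
yarn install   # creates a second B/.pnp.cjs
yarn test      # fails: controlled by multiple pnpapi instances

One commonly suggested way to avoid the second install (not something the question states) is to keep the single install at the root and run the child's scripts through it, e.g. yarn workspace B run test from A's directory, so that only A/.pnp.cjs ever exists.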

Related

Can you use Yarn2 PnP "zero-installs" on a machine without Yarn?

I'm playing with Yarn 2's "zero install" stuff for a minor tool to address one of my boss's random allergies. (He takes umbrage at the expectation of having to run npm i on a cloned repo to make it work and insists putting node_modules into version control is somehow not a godawful idea; so I want to use this as an excuse to sneak in Yarn and also stop him from powering that through.)
As I understand, what "zero install" basically means is Yarn tries to make putting dependency installation state into VCS actually feasible. However, to run the actual app, Yarn needs to replace Node's dependency resolution with its PnP mechanism. This happens automagically for Node instances run from Yarn scripts, but running Yarn scripts requires Yarn to be available. (And remember, we're trying to solve the problem of somebody being arbitrarily stubborn about installing things.)
The best I have is making my start script be npx yarn node app.js, but that feels unnecessarily convoluted; after all, with Yarn 2 the tool itself is stored in .yarn/releases and the global yarn command uses that, but that's a huge minified blob of some bundler's output, and I don't know how I'd begin invoking it.
To register the PnP runtime produced by Yarn, it is enough to require .pnp.js from the command line, so you can run your app.js via:
node -r ./.pnp app.js
There is another way to do the same: you can require .pnp.js from within the app, but when you don't do it from the command line you must also call the setup function on the returned PnP API instance; just add this line at the top of app.js:
require('./.pnp').setup();
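Putting it together, a minimal app.js sketch (only the setup line comes from the answer; the dependency name is a placeholder):

// app.js — register the PnP runtime before anything else is required
require('./.pnp').setup();
// from here on, requires resolve through PnP as usual
const someDependency = require('some-dependency');

The ordering matters: the setup() call has to run before any require of a third-party package, because without the PnP runtime registered Node falls back to looking for a node_modules folder that doesn't exist.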

Using Peer Dependencies With Local (file:../some-lib) Dependencies

I have a monorepo that has many micro-services in it. There are some library-type functions / classes that I want to make available to any micro-service that needs it. However, if that library package declares a peer dependency, the peer dependency is not found when running code from within the thing that depends on the library.
Consider this repo structure:
lib/
  some-library/        (peerDepends on foo)
    index.js           (requires foo)
    node_modules/      (will be empty)
services/
  some-service/        (depends on foo and some-library)
    index.js           (requires some-library)
    node_modules/      (will have:)
      foo/
      some-library     (a symlink to ../../lib/some-library)
When running node services/some-service/index.js, you'll get the error "Cannot find module 'foo'", emanating from lib/some-library/index.js.
Presumably this happens because node is only looking at lib/some-library/node_modules and any node_modules folder that is in an ancestor directory. But since this code was run from services/some-service (as the working directory), and because of the symlink in services/some-service/node_modules, I would've expected this to work.
Here's a repo you can easily clone to see the problem: https://github.com/jthomerson/example-local-dependency-problem
git clone git@github.com:jthomerson/example-local-dependency-problem.git
cd example-local-dependency-problem
cd services/some-service
npm install
node index.js
I only see two solutions:
Don't use peerDependencies inside the library
Install each peer dependency at the root of the project for the sake of local development and testing.
Neither of those is a really great solution, because neither allows each service to have its own versions of the dependencies. It also means that if the local version (or the library's version) of a dependency is bumped, all services that use the library have their dependency versions bumped at the same time, which makes them more brittle because they're all tied together.
How about adding the --preserve-symlinks flag?
E.g.:
node --preserve-symlinks index.js
Here's a link to the docs
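To avoid typing the flag every time, it can live in the service's npm scripts (a sketch; the script name and layout are assumptions):

services/some-service/package.json (fragment):
"scripts": {
  "start": "node --preserve-symlinks index.js"
}

Run from the service directory, npm start then resolves the symlinked library's requires against the service's own node_modules.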
I had a dual-workspace setup in which:
workspace1/
  shared-library
  module-library       (peer-depends on shared-library)
workspace2/
  main-app             (depends on module-library and shared-library)
Now dependencies for workspace projects are defined in the tsconfig.base.json file under compilerOptions.paths.
However, for workspace2, which is unrelated to workspace1, I install the packages (both via file:). When I then build main-app, I get an error saying that module-library is unable to find shared-library (even though it's installed in workspace2).
I had to add ./../workspace1/dist/shared-library to the compilerOptions.paths in tsconfig.base.json of workspace2 (note the reference to workspace1).
This obviously couples the workspaces on my filesystem. But for development purposes this is perfect.
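For what it's worth, the relevant fragment of workspace2's tsconfig.base.json would look roughly like this (the package name and dist path are taken from the description above; everything else is illustrative):

// workspace2/tsconfig.base.json (fragment, sketch)
{
  "compilerOptions": {
    "paths": {
      "shared-library": ["./../workspace1/dist/shared-library"]
    }
  }
}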

Webpack/babel bundling outdated code for local linked packages after first run

When trying to develop a library, npm link and npm link [libraryname] can be used to connect local packages for debugging.
For some reason when using it with the webpack dev server, it does not update the consumed package in the consumer project. The first time the project is run, it correctly imports the linked package. On subsequent runs the imported module stays the same as it was the first time it was run, despite the code of the package in node_modules being correct and up to date. It never changes from its initial state on the first run, even when removing and re-installing the referenced package.
Is there some caching mechanism at work that can be disabled? If not, where does webpack get the outdated sources from?
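For anyone debugging the same thing, the two knobs that usually come up are webpack's symlink resolution and babel-loader's on-disk cache; a sketch of turning both off (whether either one is the culprit here is an assumption, not something the question establishes):

// webpack.config.js (fragment, sketch)
module.exports = {
  resolve: {
    symlinks: false, // resolve requests against the symlink path instead of the real path
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        use: {
          loader: 'babel-loader',
          options: { cacheDirectory: false }, // disable babel-loader's transpile cache
        },
      },
    ],
  },
};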

Docker compose v3 named volume & node_modules from npm install

Using compose v3.
In the build I copy package.json and run npm install into
/var/www/project/node_modules
I don't add any code in the build phase.
In compose I add volumes
- ./www:/var/www/project/www
As everyone knows, the host bind to ./www will effectively "overwrite" the node_modules I installed during the build phase.
Which is why we add a named volume afterwards:
- ./www:/var/www/project/www
- modules:/var/www/project/www/node_modules
This works fine and dandy the first time we build/run the project: since the named volume "modules" doesn't exist yet, the www/node_modules from the build phase gets copied into it.
HOWEVER, here is the actual issue.
The next time I make a change to package.json and do:
docker-compose up --build
I can see the new npm modules being installed, but once the named "modules" volume is attached (it now exists, with content from the previous run), it "overwrites" the newly installed modules in the image.
The above method of adding a named volume is suggested in tons of places as a remedy for the node_modules issue. But as far as I can see from lots of testing, this only works once.
If I were to rename the named volume every time I make a change to package.json, it would of course work.
A better approach would be to include an rm command in your entrypoint script to clean out node_modules before running npm install.
As an alternative, you can use docker system prune before running another build, so that nothing left over from earlier builds is reused (note that a named volume survives pruning unless you also remove it, e.g. with docker-compose down -v).
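A sketch of the entrypoint approach mentioned above (the volume path comes from the compose snippet; the working directory for npm install is an assumption, since the image layout isn't fully shown):

#!/bin/sh
# entrypoint.sh (sketch)
rm -rf /var/www/project/www/node_modules/*   # clear whatever the named volume carried over
npm install                                  # reinstall against the current package.json
exec "$@"                                    # hand off to the container's main command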

Run jasmine specs via phantom, got "Can't find variable: describe"

I met the same problem when running it on a CentOS 5 Linux server as a CI job, with PhantomJS 1.7 (which I compiled myself).
Running "jasmine" task
Testing jasmine specs via phantom
...
[D] ["phantomjs","onResourceReceived","GET
http://127.0.0.1:8888/test/spec/CommonTest.js"]
[D] ["phantomjs","onError","ReferenceError: Can't find variable: describe", [{"file":"http://127.0.0.1:8888/test/spec/CommonTest.js","line":31,"function":""}]]
ReferenceError: Can't find variable: describe
...
The specs run successfully on other machines, such as WinXP, etc.
In the end the cause was this: I had created a symbolic link for "node_modules" to re-use a common module, but that prevents grunt-jasmine-runner from fetching something (not sure what yet). To resolve it, I needed to copy the "node_modules" files directly under the project folder.
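In shell terms, the workaround amounts to something like this (the shared location is a placeholder; the post doesn't name it):

rm node_modules                                     # remove the symlink
cp -R /path/to/shared/node_modules ./node_modules   # copy the real files into the project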

Resources