Vite rebuilds the dev server on every HTTP request, causing GraphQL schema duplicates that instantly crash the server -- build and production work, repo inside -- NestJS

I'm trying to build a GraphQL server using NestJS, with Vite + SWC as the compiler/builder for performance reasons: Webpack would take 50-60+ seconds on each rebuild of a big project, while SWC/Vite seems to cut that down by a factor of at least 5.
Here's a repository that reproduces the issue with a basic 'health check' endpoint and graphql query.
The main tools involved:
"#nestjs/graphql": "10.0.9",
"#nestjs/apollo": "10.0.9",
"typescript": "4.7.4",
"vite-plugin-node": "1.0.0",
"vite": "2.9.13",
"#swc/core": "1.2.207",
"vite-tsconfig-paths": "3.5.0"
Now, I have played around with these pinned versions, trying out various combinations of older releases, but I've narrowed the flaw down to Vite specifically.
There's a GitHub issue opened over a month ago that's probably directly related, with this problem being merely a symptom of it.
If you build the app and serve it, everything works fine, because the production version calls the bootstrap() function, which is not handled by the Vite development server.
This is also a NestJS-specific problem, since NestJS uses the code-first approach and builds the GraphQL schema at runtime.
I'm trying to patch this issue somehow by attempting three things:
stop the development server from rebuilding on every request
configure the development server to cleanup after itself on every request
configure NestJS's GraphQL module so that it builds the schema only once, something as simple as the guard below:
let built = false;
if (!built) {
  buildSchema();
  built = true;
}
I'm counting on that built variable not changing between requests, but if it does, I might find a way to tie it to the start command via a file outside of vite's scope.
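One caveat with that third approach: if vite-plugin-node re-evaluates the entry module on each request (which is what the constant rebuilding suggests), a plain module-level flag would be reset every time. Parking the flag on globalThis, which outlives module re-evaluation within the same process, is one way to hedge against that. A minimal sketch, where buildSchemaHere is a hypothetical stand-in for whatever actually constructs the schema:

// Sketch only: a module-level `built` flag may be wiped when Vite re-evaluates
// this module, so the guard lives on globalThis instead, which survives
// re-evaluation as long as the dev-server process itself keeps running.
function buildSchemaOnce(buildSchemaHere) {
  if (!globalThis.__schemaBuilt) {
    buildSchemaHere();
    globalThis.__schemaBuilt = true;
  }
}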
Thank you.

Related

NuxtJS & Firebase : upgrading to NodeJS 16 engine breaks Firestore listener (Firebase rules)

I've been using NuxtJS (v2.15.8) with Nuxt Firebase (v7.6.1), running on the NodeJS 12 engine (v12.21.0 to be exact), for a web application I've been developing incrementally for the past couple of years; my web app is now quite complex.
I am trying to upgrade NodeJS to the latest LTS version (v16.13.2) and encounter one major issue after switching the NodeJS version (using nvm) and changing the package.json of my five packages from Node 12 to Node 16:
package.json :
"engines": {
"node": "16",
..
},
When running exactly the same web application after these changes, it starts correctly, but Firebase Rules seem to break, with this error: FirebaseError: false for 'get' @ L61, false for 'get' @ L268.
It is a cryptic error, but from experience and from all I could find online, it happens when a call to Firestore gets blocked by the defined Firebase Security Rules. In my case, it happens on an onSnapshot call that listens for changes to the currently logged-in user. Some other calls to Firestore (using get rather than onSnapshot) seem to work fine, and Firebase Authentication works well too.
Here is the full error stack :
loggedInUser.js?384a:65 Error listening to user changes
FirebaseError: false for 'get' @ L61, false for 'get' @ L268
at new n (prebuilt-306f43d8-45d6f0b9.js?23bd:188:1)
at eval (prebuilt-306f43d8-45d6f0b9.js?23bd:10426:1)
at eval (prebuilt-306f43d8-45d6f0b9.js?23bd:10427:1)
at n.onMessage (prebuilt-306f43d8-45d6f0b9.js?23bd:10449:1)
at eval (prebuilt-306f43d8-45d6f0b9.js?23bd:10366:1)
at eval (prebuilt-306f43d8-45d6f0b9.js?23bd:10397:1)
at eval (prebuilt-306f43d8-45d6f0b9.js?23bd:15160:1)
at eval (prebuilt-306f43d8-45d6f0b9.js?23bd:15218:1)
The portion of code triggering the error is:
listenUser({ commit }, userId) {
  const userRef = this.$fire.firestore.collection('users').doc(userId);
  userListener = userRef.onSnapshot(
    function (userDoc) {
      if (userDoc.exists) {
        const user = userConverter.fromFirestoreData(userDoc.data());
        commit('SET_LOGGED_IN_USER', user);
      }
    },
    function (error) {
      console.error('Error listening to user changes', error);
    }
  );
},
As soon as I revert to Node 12, the same call works fine and isn't blocked by the Firebase Rules, so the error doesn't appear.
I therefore have several questions:
Does anyone understand what's happening there? Are there known changes in the behavior of Firebase Rules directly related to the NodeJS engine?
Do you think this issue could come from Nuxt or its Nuxt Firebase module not working correctly under NodeJS 16?
Is it required to also upgrade NuxtJS to a newer version, or should it be possible to simply update the Node engine?
Is it required to update to a newer version of Firebase (modular implementation), despite the Nuxt Firebase module stating:
"This module does not support the new modular syntax from Firebase v9+. If you plan to use the new modular mode of Version 9, we advise you to implement Firebase manually as described in the following medium article. It is currently unclear when, and if, this module will support the new modular mode."
Source: their GitHub repo
Any help to understand what's going on here is welcome!
Thanks a lot for your help!
Regarding your questions:
I'm unaware of what is causing this issue, but there are no known changes in the behavior of Firebase Rules depending on the NodeJS version you are using.
It's hard to assess without more information. However, I deployed a sample NuxtJS app following this guide on NodeJS 16 and it worked. Additionally, the error code, as you mentioned, appears when a Firestore Rule blocks a query. Therefore I think the root cause might be in the NuxtJS Firebase module.
I wasn't able to find any documentation suggesting that you need to upgrade NuxtJS when upgrading NodeJS. Additionally, you mentioned that you are using version 2.15.8 of NuxtJS, which according to these release notes is the latest version.
I'm unsure about further support for NuxtJS considering that statement, but according to this Firebase documentation it is recommended to upgrade to version 9.
If you decide to attempt the upgrade to Firebase v9, make sure to also upgrade the Nuxt Firebase module to version 8.0.0 or higher; this version provides support for the compat library so you can use Firebase v9, although still with the old syntax. More information can be found here.
Lastly, if you'd like to check whether a Firebase Rule is working as expected, you can quickly test it using the Rules Playground.
Long story short: upgrading to Firebase v9 worked.
Before I did that, I was stuck with rules preventing me from accessing Firestore documents as soon as I tried running the project under the Node 16 engine.
So I had to make the following changes:
update Firebase to v9
implement the configuration through a plugin rather than the nuxt-firebase module
make all the required changes in my code to use the v9 modular syntax (I didn't try the compat version)
Now that I use the latest version of Firebase, I tried switching to NodeJS 16 again, and it runs fine, including the Firebase Security Rules.
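For reference, a minimal sketch of what the plugin-based v9 setup can look like; the file name and config values are placeholders, and listenUser is a hypothetical rewrite of the listener from the question in modular syntax:

// plugins/firebase.client.js -- initialize Firebase v9 directly in a Nuxt
// plugin instead of going through the nuxt-firebase module.
import { initializeApp } from 'firebase/app';
import { getFirestore, doc, onSnapshot } from 'firebase/firestore';

const app = initializeApp({
  apiKey: 'YOUR_API_KEY', // placeholder values, use your own project config
  authDomain: 'YOUR_PROJECT.firebaseapp.com',
  projectId: 'YOUR_PROJECT',
});

export const db = getFirestore(app);

// The onSnapshot listener from the question, in v9 modular style.
// Note that exists is a method in v9, not a property as in v8.
export function listenUser(userId, onChange, onError) {
  return onSnapshot(
    doc(db, 'users', userId),
    (userDoc) => {
      if (userDoc.exists()) onChange(userDoc.data());
    },
    (error) => onError(error)
  );
}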

PG (Node-Postgres) Pool Hangs on Connect ... But Only Inside Gatsby?

NOTE: This is mainly a question about the pg or Node-PostgreSQL module. It has details from Gatsby and Postgraphile, but I don't need expertise in all three, just pg.
I have a database that works great with a PostGraphile-using Express server. I can also access it via node at the command line ...
const { Pool } = require("pg");
const pool = new Pool({ connectionString: myDbUrl });
pool.connect().then(() => console.log('connected'));
// logs 'connected' immediately
The exact same database also previously worked great with Gatsby/PostGraphile via the gatsby-source-pg plug-in ... but recently I changed dev machines, and when I try to build or run a dev server, Gatsby hangs on the "source and transform nodes" step. When I debug it, it's hanging on a call to pool.connect().
So I literally have two codebases both using PostGraphile, both with the same config, and one works and the other doesn't. Even stranger, if I edit the source code of the Gatsby plug-in in node_modules, to make it use the exact same code (which I can run at the command line successfully) ... it still hangs.
The only thing I can think of is that some other Gatsby plug-in is using up all the connections and not releasing them, but as far as I can tell (e.g. by grepping through node_modules) no other plug-in even uses pg.
So really I have two questions:
A) Can anyone help me understand why connect would hang? Bonus points if you can help me understand why it does so with a known-good config, and only inside Gatsby (after some environmental factor changed).
B) Can anyone help me fix it? If it might be some sort of "previous code forgot to release connections" issue, is there any way I can test for that? If I could just log new Pool().areYouBroken() somehow that would be amazingly useful.
Try:
npm install pg@latest
This is what got my pool/connection to start working as expected.
Annoying answer: because of a bug (thank you @charmander). For further details see: https://github.com/brianc/node-postgres/issues/2300
P.S. I never did find any sort of new Pool().areYouBroken() function.
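For what it's worth, pg's Pool does expose counters that get close to an areYouBroken() check, and connectionTimeoutMillis turns a silent hang into a rejected promise. A sketch (the connection string is a placeholder):

const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // placeholder for your DB URL
  connectionTimeoutMillis: 5000, // makes connect() reject instead of hanging forever
});

// The pool's documented counters show whether clients are being exhausted.
function logPoolHealth(pool) {
  console.log({
    total: pool.totalCount,     // clients ever created by the pool
    idle: pool.idleCount,       // clients sitting unused
    waiting: pool.waitingCount, // callers queued waiting for a client
  });
}

pool.connect()
  .then((client) => { logPoolHealth(pool); client.release(); })
  .catch((err) => console.error('connect failed or timed out:', err));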

How to prevent Mocha from preserving require cache between test files?

I am running my integration test cases in separate files for each API.
Before it begins, I start the server along with all services, like databases. When it ends, I close all connections. I use before and after hooks for that purpose. It is important to know that my application depends on an enterprise Framework where most of the "core work" is written, and I install it as a dependency of my application.
I run the tests with Mocha.
When the first file runs, I see no problems. When the second file runs, I get a lot of errors related to database connections. I tried to fix it in many different ways, most of which failed because of the limitations the Framework imposes on me.
While debugging, I found out that Mocha actually loads all the files first, which means that all code written before the hooks and the describe calls is executed. So when the second file is loaded, require.cache is already full of modules. Only after that does the suite execute the tests sequentially.
That has a huge impact with this Framework because many objects are actually singletons, so if an after hook closes a connection with a database, it closes the connection inside the singleton. The way the Framework was built makes it very hard to work around this problem, for example by reconnecting to all services in the before hook.
I wrote some very ugly code that helps me until I can refactor the Framework. This goes in each test file where I want to invalidate the cache.
function clearRequireCache() {
  Object.keys(require.cache).forEach(function (key) {
    delete require.cache[key];
  });
}

before(() => {
  clearRequireCache();
});
It is working, but it seems to be very bad practice, and I don't want this in the code.
As a second idea, I was thinking about running Mocha multiple times, once for each "module" (in the sense of my Framework) or file.
"scripts": {
"test-integration" : "./node_modules/mocha/bin/mocha ./api/modules/module1/test/integration/*.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file1.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file2.integration.js"
}
I was wondering if Mocha provides a solution to this problem, so I can get rid of that code and postpone the refactoring a bit.
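Mocha loads all files into a single process by design, so one workaround is a process per file. If you go that route, a small runner script keeps the package.json script manageable; a sketch, assuming the glob package (v7-style API) is installed and the paths match the ones above:

// run-integration.js -- spawn one Mocha process per test file so each run
// starts with a fresh require cache (and therefore fresh Framework singletons).
const { execFileSync } = require('child_process');
const glob = require('glob'); // assumed dev dependency

const files = glob.sync('./api/modules/**/test/integration/*.integration.js');
for (const file of files) {
  // stdio: 'inherit' streams Mocha's output straight to this console
  execFileSync('./node_modules/.bin/mocha', [file], { stdio: 'inherit' });
}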

Relay/GraphQL Schema cache not updating when I update schema on server side

I have a React app using Relay and a remote GraphQL server. When I start the webpack server, I have it fetch the latest schema and feed it into the babel-relay-plugin.
It works great... except when I make a schema change. It appears React or Relay or webpack or something is caching the schema, because I get a schema validation error in the browser console when I run the app. However, when I run the query manually against the GraphQL server using GraphiQL, the query is successful. So I'm thinking it has to be some sort of cache on the React/Relay/webpack side?
Things I've tried:
Restarting webpack server
Removing node_modules and npm install
I've even tried restarting my computer (that actually seemed to work, but it may have been a coincidence)
Thanks in advance for your help.
Turns out, of course, it was human error. I had cacheDirectory set to true in my babel-loader query. You can read about it in the babel-loader readme (just do a find on the page for 'cacheDirectory'): https://github.com/babel/babel-loader
Once I changed that to false, which is the default, the problem went away. Hope that helps others.
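For context, the setting lives in the babel-loader options of the webpack config; a minimal sketch of the relevant rule (webpack 2 syntax, everything else omitted):

// webpack.config.js -- only the loader rule that matters here.
module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            // true caches transpiled output across runs, so a changed GraphQL
            // schema never reached babel-relay-plugin; false is the default.
            cacheDirectory: false,
          },
        },
      },
    ],
  },
};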
This happened to me when I switched to Webpack 2.
The solution in my case was to move the babelRelayPlugin to be the first plugin to execute in .babelrc.
I'm not exactly sure why, though.

How to test an AngularJS/SocketStream/Node.js app using Karma

I am working on an AngularJS application that is delivered by a SocketStream/node.js server.
I have an AngularJS service that calls api functions on the SocketStream server and progress has been good so far.
But now the time has come to start writing the first tests, and the first testing framework that came to mind is Karma/Jasmine, since this is the recommended AngularJS setup.
So far so good, but since my AngularJS modules are imported using 'require' (SocketStream's version, not require.js) and server API calls are part of the test, I need to configure Karma to load SocketStream (at least its client side).
I took a good look at 'https://github.com/yiwang/angular-phonecat-livescript-socketstream', but when I run this example I get runtime errors, possibly because I have later versions of various dependencies installed.
I managed to get 'require' resolved by packing my SocketStream app (adding 'ss.client.packAssets()' to app.js and running 'SS_PACK=1 node app.js'), but when I start Karma it logs an error message saying:
'Chrome 23.0 (Linux) ERROR
Uncaught TypeError: undefined is not a function
at /the...path/client/static/assets/app/1368026081351.js:25'
'1368026081351.js' is the SocketStream packed-assets file. If I don't load it, the error message is something like 'require is undefined', so my best guess is that the error is happening somewhere inside the SocketStream require code; I also ran Karma in DEBUG mode and could see all the files being served.
I have been trying different approaches to find out what is happening, but to no avail. So my questions are:
Is anybody else successfully testing AngularJS/SocketStream using Karma?
Does anybody have any suggestions as to how I can fix, or at least debug this problem?
Are there any alternatives/better solutions?
Time to answer, sort of, my own question:
Sort of, because I came to the conclusion that Karma and Node.js/SocketStream have a lot of overlap, so I decided to see if I could omit Karma altogether and deliver the Jasmine testing platform through SocketStream. It turns out that this is possible, and here's how I did it:
I defined a new SocketStream route and client in my 'app.js' file:
ss.client.define('test', {
  view: 'SpecRunner.html',
  css: ['libs/test'],
  code: ['libs', 'tests', 'app'],
  tmpl: 'none'
});

ss.http.route('/test', function (req, res) {
  res.serveClient('test');
});
I downloaded jasmine-standalone-1.3.1.zip and copied 'SpecRunner.html' to the 'client/views' folder. I then edited it to make it load AngularJS and all SocketStream client files, like all other views:
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.0.6/angular.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.0.6/angular-resource.min.js"></script>
<SocketStream/>
I removed the 'script' tags that import the sample source files ('Player.js' and 'Song.js') and the specs, but left the last 'script' block in place unmodified.
I then created a new folder inside 'client/css/libs' called 'test' and copied 'jasmine.css' in there unmodified.
Then I copied 'jasmine.js' and 'jasmine-html.js', renamed to '01-jasmine.js' and '02-jasmine-html.js' but otherwise unmodified, into '/client/code/libs'.
Now Jasmine is in place and will be invoked via the '/test' route. The slightly unsatisfactory bit is that I haven't found an elegant place to store my spec files: so far they only work if I place them inside the 'libs' folder. Anywhere else, they are served by SocketStream as modules and are not run.
But I can live with that for now. I can run Jasmine tests without having to configure a special Karma setup.
