I've recently migrated my project to Angular 13. It builds locally on my Mac; however, it has started to fail when I run it in my Docker build container (tested both locally and on our CI/CD server).
It's complaining about the DeckGL import:
Cannot find module '@deck.gl/layers' or its corresponding type declarations.
1 import * as Layers from '@deck.gl/layers';
Cannot find module '@deck.gl/geo-layers' or its corresponding type declarations.
2 import * as GeoLayers from '@deck.gl/geo-layers';
Cannot find module '@deck.gl/aggregation-layers' or its corresponding type declarations.
3 import * as AggregationLayers from '@deck.gl/aggregation-layers';
I've used npm list to confirm the dependencies are the same on my Mac and inside the Alpine container, and I've tested with the same Node version (and tried a couple of different Node/Alpine images). The issue persists in the container while the build still works locally.
Any ideas what could be causing this?
Really silly issue, but I'm posting an answer in case someone runs into something similar in the future.
I had accidentally run npm install in a top-level directory and installed some Node modules that never made it into my project's package.json. In my case, the core deck.gl module was listed in package.json, but none of the layer packages (which are separate npm modules) were.
When I ran the build locally, it resolved those layer packages from that stray top-level node_modules, but in the CI/CD environment they simply weren't there, so the build failed.
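If you hit the same thing, the fix is simply to add the layer packages to package.json alongside the core deck.gl dependency. A minimal sketch (the version range is illustrative; match it to whatever deck.gl version you're on):
npm install --save @deck.gl/layers @deck.gl/geo-layers @deck.gl/aggregation-layers
which should leave you with entries along these lines in package.json:
"dependencies": {
  "deck.gl": "8.x",
  "@deck.gl/layers": "8.x",
  "@deck.gl/geo-layers": "8.x",
  "@deck.gl/aggregation-layers": "8.x"
}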
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Ventura 13.2.1
TensorFlow.js installed from (npm or script link): npm
TensorFlow.js version: 4.2.0
Describe the problem
I had previously used the @tensorflow/tfjs-node library successfully with no issues. Now, however, I have changed environments and unfortunately I am running into issues. I don't have the original source code any more to compare to see what the difference is.
I am running under node v18.4.0, although I have tried v16.16.0 and v14.21.2 (using nvm on macOS).
The problem is that the library now seems to want all sorts of supporting modules, which are not installed:
aws-sdk
nock
mock-aws-s3
It will not build without these, so I npm i them and then it builds happily.
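For clarity, the workaround I mean is simply something like:
npm i aws-sdk nock mock-aws-s3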
So first question: why might my environment now be needing these where it didn't need them before? What might be different? As I say, sadly I don't have the last environment around any more.
Having built this, I now receive the error:
Error: @package/learn package.json is not node-pre-gyp ready:
package.json must declare these properties:
binary
Although I could configure all this, I think it is the wrong thing to do, since I had it up and running previously without any of this. So where are these issues now coming from?
Can I run @tensorflow/tfjs-node without installing or needing those three dependencies? I can provide further config if the cause is not apparent to someone who has had the same issue.
I have updated my custom library and UI Angular applications to version 15, but now when I try to run "ng serve" it says it can't find the module for my custom library. We are working with Artifactory for our repositories, and I noticed that since I updated the Jenkins build to use Node 16 it packages the npm package differently.
So I'm curious if anyone has come across this before. I was able to consume the library without any issues when it was version 11, but after the update to 15 I get the following error.
Error: Module not found: Error: Can't resolve '@cto_compliance_amf/amf-library' in 'C:\Users\zkafpf7\Documents\MyProjects\amf_ui\src\app'
Any and all help is appreciated.
It turns out that when I updated to Node 16 for the Jenkins build, the Babel script was no longer working correctly. I updated my prepack script to use shx instead of Babel to move the files from the dist folder to the root folder, then updated the .npmignore file to omit the files that weren't needed in the package. Now everything works. Hope this helps someone else.
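For illustration, the package.json change looked roughly like this; the dist path (and what you list in .npmignore) depends on your own library build, so treat it as a sketch rather than the exact script:
"scripts": {
  "prepack": "shx cp -r dist/* ."
}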
I recently updated Node to version 16+. Prior to that, I was able to run the yarn build command to create a build of my project.
But after installing Node 16+, the yarn build command throws the following errors:
./lib/view-registration.js
Module not found: Error: Can't resolve 'hoisted/@msdyn365-commerce-modules/wishlist/dist/lib/modules/wishlist-items/wishlist-items.view.js' in 'H:\source\D365_eCommerce\lib'
@ ./lib/view-registration.js 5:112769-113063
@ ./node_modules/@msdyn365-commerce/bootloader/entry/client.js
@ multi ./node_modules/@msdyn365-commerce/bootloader/entry/webpack-public-path.js ./node_modules/@msdyn365-commerce/bootloader/entry/client
It seems like it is trying to pick the module from the hoisted folder.
I am new to this concept, so I have no idea why it is targeting this folder instead of picking the module from '@msdyn365-commerce-modules/wishlist/dist/lib/modules/wishlist-items/wishlist-items.view.js' directly.
Any explanation would be appreciated.
How can I force it not to pick the module from the hoisted folder and to use '@msdyn365-commerce-modules/wishlist/dist/lib/modules/wishlist-items/wishlist-items.view.js' directly?
Thanks,
Aman
This issue mainly occurred due to a corrupted Node installation.
The solution is to make sure that all the dependencies required for Node (16+) are also installed.
You can either download those dependencies manually or allow the Node installer to download them at installation time.
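As a rough outline of what that looks like in practice (these exact steps aren't from the original fix, just the general pattern of verifying a clean Node 16 install and then reinstalling the project's dependencies from scratch):
node -v       # should report the 16+ version you just installed
yarn -v       # the package manager that came with it
rm -rf node_modules    # or delete the folder manually on Windows
yarn install
yarn build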
Thanks,
Aman
I am working on an application via the toolchain tool on IBM Cloud and editing the code via the Eclipse Orion IDE. Since I am not accessing it through my local CLI, my understanding is that in order to, so to speak, npm install {package}, I would just need to include the package in the package.json file under dependencies and require it in my app. However, when I load the application, I get a 'require is not defined' error, indicating that the package has not been installed. Moreover, require() is being used in the app.js file, which launches the application, and not from files in my public directory.
After playing around further, it seems it might have to do with how the directory tree is being traced, as the error is only thrown in subdirectories. For example, require('express') works in app.js, which is in the main directory ./, but fails when it is called in test.js in ./subdirectory/test.js. I feel like I'm missing something painfully simple, like an endpoint configuration or something.
I've been searching around but I can't seem to find how to get the packages loaded, preferably without using the cli. Appreciate any pointers. Thanks!
Update: After playing around further, I am also getting a 'module is not defined' error when trying to require from another file in the same directory. For example, module.exports = 'str' returns this error, while trying require('./file') returns 'require is not defined'. It might have to do with how Node wraps the functions?
Update 2: Tried "start": "npm install && node app.js" in package.json but no luck. Adding a build stage which calls npm install before deployment also does not work
Update 3: After adding npm install build stage, I am able to see that the dependencies have been successfully built via the logs. However, the require is not defined error still persists.
Update 4: Running npm install from my CLI doesn't work either, even though all packages and dependencies are present
Update 5: Running cf restage or configuring the cache via cacheDirectories does not work either
Opened a related question regarding deployment here
Found out my confusion was caused by not realizing that require() cannot be used on the client side unless you go through a bundler such as Browserify.
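For anyone in the same spot, a minimal sketch of the Browserify route (the file names here are just placeholders for your own client-side entry point):
npm install --save-dev browserify
npx browserify public/js/main.js -o public/js/bundle.js
Then load the bundle from your HTML instead of the original file:
<script src="/js/bundle.js"></script>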
I have written an Electron application using Node, Electron Boilerplate, and phantom. It works perfectly fine for me on my Linux machine; I copied the source over to Windows 10 and ran it with npm start, and all went smoothly.
However, when I try to build the application with the boilerplate module using npm run release, things go a little less smoothly. I can install and open the application just fine, but when I click the button that activates the phantom module, the window goes all white and nothing happens. I was able to log some errors with the dev tools.
First, I have:
C:\...\dist\win-unpacked\resources\app.asar\node_modules\phantom\lib\phantom.js:361
Uncaught (in promise) Error: Error reading from stdin: Error: write EPIPE(…)
I did some research into similar issues, namely here, and it seems to me the issue is starting the child process, PhantomJS, with the npm module phantom. Originally, I was using a WPF application I wrote in C# to start the process, and that worked just fine. This leads me to believe that the phantom module is the culprit.
So I tried swapping out the npm phantom module for horseman, but got similar results:
Unhandled rejection HeadlessError: Phantom immediately exited with: 4294967295
at ChildProcess.immediateExit (C:\...\dist\win-unpacked\resources\app.asar\node_modules\node-horseman\node_modules\node-phantom-simple\node-phantom-simple.js:153:23)
at ChildProcess.g (events.js:286:16)
at emitTwo (events.js:106:13)
at ChildProcess.emit (events.js:191:7)
at Process.ChildProcess._handle.onexit (internal/child_process.js:204:12)
Here is a shot in the dark. I am not positive this will solve your issue but here it goes:
GYP and mismatched binaries
Phantom and many other Node modules use binaries built for the specific OS they will be running on. Sometimes in your npm log files you will see references to node-gyp. node-gyp simply helps build native add-ons for Node modules. When the binaries are built, they are usually built against three main parameters (among others): the operating system, the CPU architecture, and the version of Node doing the installation.
I think you need to rebuild phantom against the version of Node that Electron is using. Most of the time, the Node version installed on your machine and the Node version running inside Electron are not the same. Electron does its best to keep up, but there is always a little lag because of the amount of work and testing required to stay up to date.
When you install phantom by running npm install phantom, it assumes it needs to install or build the binaries for the Node version your machine is using. Then, when your Electron app tries to run phantom, it looks for a binary built for Electron's Node version. When that binary isn't there, the child process immediately exits with an error.
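A quick way to see the mismatch for yourself (purely diagnostic, not part of the fix) is to compare the Node version on your machine with the one embedded in Electron:
node -v
and, from inside the running Electron app (for example in the dev tools console):
console.log(process.versions.node);      // Node version bundled with Electron
console.log(process.versions.electron);  // Electron version itself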
How to fix
Luckily, there are other people out there that have figured out how to fix this issue and have created a great tool to help generate the correct binaries.
Enter electron-rebuild:
https://github.com/electron/electron-rebuild
Electron-rebuild can be run in the command line, and it will rebuild all of your native modules to the version of Electron your project is using.
To install:
npm install electron-rebuild
To use (in Windows):
.\node_modules\.bin\electron-rebuild.cmd
This should be enough to put the correct binaries in the right place.
Other thoughts
Sometimes you may be using a package that depends on node-pre-gyp, e.g. sqlite3. There is a known issue I ran into when trying to rebuild my packages for Electron. Basically, in order to avoid this issue (if you run into it), just append --pre-gyp-fix to the above command.
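That is, for the Windows command shown above:
.\node_modules\.bin\electron-rebuild.cmd --pre-gyp-fix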
Tangent for those who run into the pre-gyp-fix issue
One more thing on the pre-gyp-fix: If one or more of your dependencies depends on one of the modules that need the pre-gyp-fix then they will be looking for the binary in the wrong place even if they are running in Electron. All of the pre-gyp binaries are stored in a folder similar to this:
.\node_modules\sqlite3\lib\binding
In my current project I have three folders here, one for Electron-v1.4, and two for node-v46 and node-v50. (hack alert) In order to have sqlite3 work with my other dependencies I copy the binary found in the Electron-v1.4 folder and put it in both node-v* folders. That way when running in Electron, all dependencies are running the correct binaries even though they are looking for them in the wrong place. (end hack alert)
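For reference, that hack amounts to something like the following on Windows (the binding folder names are examples; use whichever folders actually exist under node_modules\sqlite3\lib\binding in your project):
xcopy /E /I /Y .\node_modules\sqlite3\lib\binding\electron-v1.4-win32-x64 .\node_modules\sqlite3\lib\binding\node-v46-win32-x64
xcopy /E /I /Y .\node_modules\sqlite3\lib\binding\electron-v1.4-win32-x64 .\node_modules\sqlite3\lib\binding\node-v50-win32-x64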
Conclusion
There is no way I can be sure this has anything to do with the issue you are seeing. But it is worth a shot to see if it fixes your problem. If not then at least I hope I can help someone else experiencing the same issues I ran into.